Jan 23 14:04:11 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 14:04:12 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 23 14:04:12 crc restorecon[4690]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 
14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 23 14:04:12 crc 
restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 
14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 14:04:12 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 23 14:04:12 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 23 14:04:13 crc kubenswrapper[4775]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 14:04:13 crc kubenswrapper[4775]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 23 14:04:13 crc kubenswrapper[4775]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 14:04:13 crc kubenswrapper[4775]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 14:04:13 crc kubenswrapper[4775]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 23 14:04:13 crc kubenswrapper[4775]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.491089 4775 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497305 4775 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497365 4775 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497375 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497386 4775 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497396 4775 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497408 4775 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497417 4775 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497425 4775 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497432 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497440 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497449 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497456 4775 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497464 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497471 4775 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497479 4775 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497488 4775 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497495 4775 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497506 4775 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497515 4775 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497526 4775 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497539 4775 feature_gate.go:330] unrecognized feature gate: Example Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497547 4775 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497556 4775 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497565 4775 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497575 4775 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497584 4775 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497593 4775 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497601 4775 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497609 4775 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497617 4775 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497637 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497647 4775 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497656 4775 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497664 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497673 4775 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497681 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497690 4775 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497697 4775 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497706 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497714 4775 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497722 4775 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497730 4775 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497739 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497747 4775 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497755 4775 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 14:04:13 crc kubenswrapper[4775]: 
W0123 14:04:13.497763 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497771 4775 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497779 4775 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497786 4775 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497795 4775 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497826 4775 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497834 4775 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497842 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497850 4775 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497858 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497866 4775 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497873 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497882 4775 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497890 4775 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497898 4775 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497905 4775 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497913 4775 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497920 4775 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497930 4775 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497940 4775 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497949 4775 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497959 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497968 4775 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497976 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497985 4775 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.497992 4775 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498141 4775 flags.go:64] FLAG: --address="0.0.0.0" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498159 4775 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498174 4775 flags.go:64] FLAG: --anonymous-auth="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498186 4775 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498202 4775 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498212 4775 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498223 4775 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498235 4775 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498243 4775 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498252 4775 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498262 4775 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498273 4775 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498283 4775 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498291 4775 flags.go:64] FLAG: --cgroup-root="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498300 4775 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498309 4775 flags.go:64] FLAG: --client-ca-file="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498318 4775 flags.go:64] FLAG: --cloud-config="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498329 4775 flags.go:64] FLAG: --cloud-provider="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498341 4775 flags.go:64] FLAG: --cluster-dns="[]" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498351 4775 flags.go:64] FLAG: --cluster-domain="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498360 4775 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498369 4775 flags.go:64] FLAG: --config-dir="" Jan 23 
14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498378 4775 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498388 4775 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498400 4775 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498409 4775 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498418 4775 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498428 4775 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498438 4775 flags.go:64] FLAG: --contention-profiling="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498447 4775 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498456 4775 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498465 4775 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498474 4775 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498486 4775 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498495 4775 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498505 4775 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498513 4775 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498522 4775 flags.go:64] FLAG: --enable-server="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498532 4775 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498545 4775 flags.go:64] FLAG: --event-burst="100" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498554 4775 flags.go:64] FLAG: --event-qps="50" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498563 4775 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498573 4775 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498582 4775 flags.go:64] FLAG: --eviction-hard="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498602 4775 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498611 4775 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498620 4775 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498631 4775 flags.go:64] FLAG: --eviction-soft="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498640 4775 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498651 4775 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498660 4775 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 
14:04:13.498670 4775 flags.go:64] FLAG: --experimental-mounter-path="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498679 4775 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498688 4775 flags.go:64] FLAG: --fail-swap-on="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498697 4775 flags.go:64] FLAG: --feature-gates="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498709 4775 flags.go:64] FLAG: --file-check-frequency="20s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498718 4775 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498727 4775 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498736 4775 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498746 4775 flags.go:64] FLAG: --healthz-port="10248" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498756 4775 flags.go:64] FLAG: --help="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498765 4775 flags.go:64] FLAG: --hostname-override="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498773 4775 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498782 4775 flags.go:64] FLAG: --http-check-frequency="20s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498791 4775 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498825 4775 flags.go:64] FLAG: --image-credential-provider-config="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498835 4775 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498843 4775 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498852 4775 flags.go:64] FLAG: --image-service-endpoint="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498861 4775 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498870 4775 flags.go:64] FLAG: --kube-api-burst="100" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498879 4775 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498888 4775 flags.go:64] FLAG: --kube-api-qps="50" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498897 4775 flags.go:64] FLAG: --kube-reserved="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498906 4775 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498915 4775 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498924 4775 flags.go:64] FLAG: --kubelet-cgroups="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498933 4775 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498942 4775 flags.go:64] FLAG: --lock-file="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498950 4775 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498960 4775 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498969 4775 flags.go:64] FLAG: 
--log-json-info-buffer-size="0" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498984 4775 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.498994 4775 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499003 4775 flags.go:64] FLAG: --log-text-split-stream="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499012 4775 flags.go:64] FLAG: --logging-format="text" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499021 4775 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499031 4775 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499039 4775 flags.go:64] FLAG: --manifest-url="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499048 4775 flags.go:64] FLAG: --manifest-url-header="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499060 4775 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499070 4775 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499080 4775 flags.go:64] FLAG: --max-pods="110" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499089 4775 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499098 4775 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499106 4775 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499116 4775 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499125 4775 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499134 4775 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499143 4775 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499162 4775 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499172 4775 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499181 4775 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499191 4775 flags.go:64] FLAG: --pod-cidr="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499200 4775 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499212 4775 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499222 4775 flags.go:64] FLAG: --pod-max-pids="-1" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499231 4775 flags.go:64] FLAG: --pods-per-core="0" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499239 4775 flags.go:64] FLAG: --port="10250" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499248 4775 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 
14:04:13.499257 4775 flags.go:64] FLAG: --provider-id="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499266 4775 flags.go:64] FLAG: --qos-reserved="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499275 4775 flags.go:64] FLAG: --read-only-port="10255" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499284 4775 flags.go:64] FLAG: --register-node="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499294 4775 flags.go:64] FLAG: --register-schedulable="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499303 4775 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499317 4775 flags.go:64] FLAG: --registry-burst="10" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499326 4775 flags.go:64] FLAG: --registry-qps="5" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499335 4775 flags.go:64] FLAG: --reserved-cpus="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499345 4775 flags.go:64] FLAG: --reserved-memory="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499356 4775 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499365 4775 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499374 4775 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499384 4775 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499393 4775 flags.go:64] FLAG: --runonce="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499403 4775 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499414 4775 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499423 4775 flags.go:64] FLAG: --seccomp-default="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499431 4775 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499440 4775 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499449 4775 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499458 4775 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499468 4775 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499477 4775 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499486 4775 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499495 4775 flags.go:64] FLAG: --storage-driver-user="root" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499503 4775 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499513 4775 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499522 4775 flags.go:64] FLAG: --system-cgroups="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499531 4775 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 
14:04:13.499544 4775 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499553 4775 flags.go:64] FLAG: --tls-cert-file="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499563 4775 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499573 4775 flags.go:64] FLAG: --tls-min-version="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499581 4775 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499590 4775 flags.go:64] FLAG: --topology-manager-policy="none" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499602 4775 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499611 4775 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499620 4775 flags.go:64] FLAG: --v="2" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499631 4775 flags.go:64] FLAG: --version="false" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499642 4775 flags.go:64] FLAG: --vmodule="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499653 4775 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.499662 4775 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499898 4775 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499911 4775 feature_gate.go:330] unrecognized feature gate: Example Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499921 4775 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499930 4775 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499939 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499948 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499957 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499966 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499974 4775 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499981 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.499997 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500005 4775 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500013 4775 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500020 4775 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500028 4775 feature_gate.go:330] unrecognized feature gate: 
HardwareSpeed Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500036 4775 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500044 4775 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500052 4775 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500059 4775 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500067 4775 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500075 4775 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500082 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500090 4775 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500097 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500105 4775 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500113 4775 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500121 4775 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500129 4775 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500136 4775 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500144 4775 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500152 4775 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500161 4775 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500168 4775 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500176 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500183 4775 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500191 4775 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500199 4775 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500207 4775 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500216 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500223 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500231 4775 
feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500239 4775 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500252 4775 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500262 4775 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500270 4775 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500279 4775 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500286 4775 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500294 4775 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500302 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500309 4775 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500317 4775 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500325 4775 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500333 4775 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500340 4775 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500347 4775 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500355 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500363 4775 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500373 4775 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500382 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500390 4775 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500398 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500406 4775 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500414 4775 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500421 4775 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500429 4775 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500436 4775 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500444 4775 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500452 4775 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500460 4775 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500467 4775 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.500477 4775 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
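The long runs of "unrecognized feature gate" warnings above are expected on an OpenShift node: the rendered kubelet configuration carries the cluster-wide OpenShift feature-gate list, while the kubelet itself only registers the upstream Kubernetes gates, so it warns on, and then skips, every name it does not know rather than failing startup. A minimal Go sketch of that tolerant merge, under that assumption (hypothetical names and a trimmed gate list; not the kubelet's actual code):

package main

import "fmt"

// Stand-in for the gates the kubelet registers itself; the real set is the
// upstream Kubernetes gate registry. Names here are taken from the resolved
// map the kubelet prints at feature_gate.go:386 in the log below.
var knownGates = map[string]struct{}{
	"CloudDualStackNodeIPs":                  {},
	"DisableKubeletCloudCredentialProviders": {},
	"KMSv1":                                  {},
	"ValidatingAdmissionPolicy":              {},
}

// applyGates models the tolerant merge: unknown names are warned about and
// dropped, known names are set. (Map iteration order, and hence warning
// order, is not deterministic — just as the log interleaves them.)
func applyGates(requested map[string]bool) map[string]bool {
	resolved := make(map[string]bool)
	for name, enabled := range requested {
		if _, ok := knownGates[name]; !ok {
			fmt.Printf("W ... feature_gate.go:330] unrecognized feature gate: %s\n", name)
			continue
		}
		resolved[name] = enabled
	}
	return resolved
}

func main() {
	// "OpenShiftPodSecurityAdmission" is cluster-level only, so it is warned
	// about and skipped; "KMSv1" is known upstream and lands in the map.
	fmt.Println(applyGates(map[string]bool{
		"KMSv1":                         true,
		"OpenShiftPodSecurityAdmission": true,
	}))
}

Consistent with this reading, the resolved map logged at feature_gate.go:386 just below contains only upstream gates (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, ValidatingAdmissionPolicy, ...) and none of the OpenShift-specific names that were warned about.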
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.500490 4775 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.511579 4775 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.511630 4775 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511755 4775 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511779 4775 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511789 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511834 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511848 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511859 4775 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511869 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511877 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511885 4775 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511894 4775 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511906 4775 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511918 4775 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511929 4775 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511939 4775 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511950 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511958 4775 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511969 4775 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511978 4775 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511987 4775 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.511996 4775 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512005 4775 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512013 4775 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512021 4775 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512029 4775 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512037 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512046 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512054 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512061 4775 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512069 4775 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512078 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512086 4775 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512094 4775 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512101 4775 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512109 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512118 4775 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512127 4775 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512135 4775 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512143 4775 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512150 4775 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512158 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512166 4775 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512174 4775 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512181 4775 feature_gate.go:330] unrecognized feature gate: 
MachineAPIMigration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512190 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512197 4775 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512205 4775 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512213 4775 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512222 4775 feature_gate.go:330] unrecognized feature gate: Example Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512229 4775 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512237 4775 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512245 4775 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512253 4775 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512261 4775 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512269 4775 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512277 4775 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512284 4775 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512292 4775 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512299 4775 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512307 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512314 4775 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512322 4775 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512330 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512337 4775 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512345 4775 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512353 4775 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512360 4775 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512371 4775 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512380 4775 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512388 4775 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512396 4775 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512406 4775 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.512419 4775 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512638 4775 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512651 4775 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512659 4775 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512668 4775 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512676 4775 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512684 4775 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512693 4775 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512700 4775 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512708 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512717 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512727 4775 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512738 4775 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512746 4775 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512754 4775 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512761 4775 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512772 4775 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512781 4775 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512789 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512796 4775 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512836 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512847 4775 feature_gate.go:330] unrecognized feature gate: Example Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512857 4775 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512868 4775 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512879 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512889 4775 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512897 4775 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512907 4775 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512915 4775 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512923 4775 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512931 4775 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512939 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512947 4775 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512956 4775 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512966 4775 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512975 4775 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512984 4775 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.512992 4775 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513000 4775 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513008 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513015 4775 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513023 4775 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513031 4775 feature_gate.go:330] unrecognized 
feature gate: InsightsOnDemandDataGather Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513039 4775 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513046 4775 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513054 4775 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513061 4775 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513069 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513077 4775 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513085 4775 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513093 4775 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513100 4775 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513108 4775 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513116 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513124 4775 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513131 4775 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513139 4775 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513147 4775 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513155 4775 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513163 4775 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513170 4775 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513178 4775 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513186 4775 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513193 4775 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513201 4775 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513209 4775 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513242 4775 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513253 4775 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513265 4775 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513274 4775 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513283 4775 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.513292 4775 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.513305 4775 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.513556 4775 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.518467 4775 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.518599 4775 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.519499 4775 server.go:997] "Starting client certificate rotation"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.519550 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.519777 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-18 20:25:01.78222372 +0000 UTC
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.519908 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.532003 4775 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.533994 4775 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.536326 4775 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.548867 4775 log.go:25] "Validated CRI v1 runtime API"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.583552 4775 log.go:25] "Validated CRI v1 image API"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.586795 4775 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.592283 4775 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-23-13-59-50-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.592462 4775 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.630063 4775 manager.go:217] Machine: {Timestamp:2026-01-23 14:04:13.623530364 +0000 UTC m=+0.618359194 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:8a5d5c8e-ecf7-49d1-850c-74e085cfc75c BootID:a063d3a2-7692-443a-9621-c3db4caa1aba Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ec:82:1b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ec:82:1b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b2:34:84 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2f:12:8b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:6f:c3:cb Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:29:2e:0c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:66:7b:3b:f7:7e:c9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:4e:8c:cb:db:e2:18 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.630595 4775 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
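[analysis note] The manager.go:217 Machine record above is cAdvisor's hardware inventory with all capacities in raw bytes, and it shows each of the 12 vCPUs exposed as its own single-core socket (NumCores:12, NumPhysicalCores:1, NumSockets:12), which is a common way for a QEMU/KVM guest topology to be reported. Converted, the headline figures are: memory 33654120448 B is about 31.3 GiB, /var on /dev/vda4 at 85292941312 B is about 79.4 GiB, and the vda disk at 214748364800 B is exactly 200 GiB. A sketch of the conversion:

    # Convert the raw byte counts reported in the Machine record above.
    GIB = 1024 ** 3

    capacities = {
        "MemoryCapacity": 33_654_120_448,
        "/dev/vda4 (/var)": 85_292_941_312,
        "vda disk": 214_748_364_800,
    }

    for name, nbytes in capacities.items():
        print(f"{name}: {nbytes / GIB:.2f} GiB")
    # MemoryCapacity: 31.34 GiB, /dev/vda4 (/var): 79.44 GiB, vda disk: 200.00 GiB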
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.631093 4775 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.633842 4775 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.634266 4775 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.634344 4775 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.634757 4775 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.634781 4775 container_manager_linux.go:303] "Creating device plugin manager"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.635275 4775 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.635336 4775 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.635654 4775 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.635869 4775 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.637221 4775 kubelet.go:418] "Attempting to sync node with API server"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.637270 4775 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
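[analysis note] The container_manager_linux.go:272 entry above embeds the effective node config as a JSON document, including SystemReserved (cpu 200m, memory 350Mi, ephemeral-storage 350Mi) and the five hard-eviction thresholds. Since the payload is valid JSON it can be pulled out of the journal mechanically; a minimal sketch, again assuming a plain-text dump named kubelet.log (hypothetical) with one entry per line:

    import json

    # Locate the container_manager_linux.go:272 entry and parse its payload.
    MARKER = "Creating Container Manager object based on Node Config"

    with open("kubelet.log", encoding="utf-8") as fh:
        line = next(l for l in fh if MARKER in l)

    cfg = json.loads(line.split("nodeConfig=", 1)[1])
    print(cfg["SystemReserved"])
    # {'cpu': '200m', 'ephemeral-storage': '350Mi', 'memory': '350Mi'}
    for t in cfg["HardEvictionThresholds"]:
        print(t["Signal"], t["Operator"], t["Value"])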
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.637299 4775 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.637323 4775 kubelet.go:324] "Adding apiserver pod source"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.637346 4775 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.640890 4775 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.641731 4775 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.643247 4775 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.644321 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.644487 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.644532 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.644581 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.644977 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645032 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645050 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645067 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645095 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645111 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645127 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645153 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645173 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645194 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645217 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.645233 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.648448 4775 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.649394 4775 server.go:1280] "Started kubelet"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.650057 4775 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.650557 4775 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.651077 4775 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.651841 4775 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 23 14:04:13 crc systemd[1]: Started Kubernetes Kubelet.
Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.652375 4775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d6128238a7d9f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 14:04:13.649337759 +0000 UTC m=+0.644166539,LastTimestamp:2026-01-23 14:04:13.649337759 +0000 UTC m=+0.644166539,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.653494 4775 server.go:460] "Adding debug handlers to kubelet server"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.654611 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.654645 4775 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.655079 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 01:01:07.04527978 +0000 UTC
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.655158 4775 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.655198 4775 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.655357 4775 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.655512 4775 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.656627 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused
Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.656757 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.657347 4775 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.657411 4775 factory.go:55] Registering systemd factory
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.657435 4775 factory.go:221] Registration of the systemd container factory successfully
Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.657711 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.658795 4775 factory.go:153] Registering CRI-O factory
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.658855 4775 factory.go:221] Registration of the crio container factory successfully
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.658884 4775 factory.go:103] Registering Raw factory
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.658901 4775 manager.go:1196] Started watching for new ooms in manager
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.660238 4775 manager.go:319] Starting recovery of all containers
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.673973 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674158 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674197 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674228 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674262 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674290 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674317 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674344 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674375 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674403 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674433 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674462 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674490 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674523 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674558 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674588 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674616 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674648 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674678 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674705 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674836 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674884 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674918 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674951 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.674983 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675012 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675046 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675077 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675106 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675139 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675169 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675200 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675226 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675258 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675287 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675315 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675342 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675659 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675714 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675934 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.675992 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676024 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676056 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676085 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676114 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676144 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676174 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676203 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676236 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676267 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676297 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676326 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.676366 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677515 4775 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677586 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677621 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677655 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677685 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677713 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677738 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677767 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677792 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677889 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677921 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677951 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.677981 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678011 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678044 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678075 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678106 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678137 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678167 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678195 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678225 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678253 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678305 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678334 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678364 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678394 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678423 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678448 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678473 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678498 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678523 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678549 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678574 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678601 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678628 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678664 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678694 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678723 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678754 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678783 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678850 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678885 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678916 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678945 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.678977 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679006 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679033 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679074 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679107 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679233 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679256 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679376 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679399 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679417 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679435 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.679457 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680052 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680135 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680162 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680181 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680202 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680222 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680238 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680255 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680270 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680285 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680299 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680314 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680331 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680346 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680359 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680373 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680389 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680403 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680419 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680435 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680449 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680462 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680476 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680493 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680507 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680522 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680538 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680556 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680571 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680586 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680597 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680612 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680629 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680643 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680656 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680671 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680683 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680698 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753"
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680712 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680729 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680745 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680758 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680773 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680788 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680866 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680880 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680894 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680907 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680924 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680939 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680953 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680965 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680981 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.680997 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681011 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681025 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681038 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681052 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681067 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681079 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681092 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681105 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681117 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681129 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681144 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681157 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681172 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681188 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681202 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681216 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681230 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681241 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681253 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681269 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681281 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681301 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681314 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681328 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681344 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681375 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681388 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681402 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681415 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681428 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681440 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681454 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681468 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681480 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681515 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681531 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681545 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681558 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681571 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681584 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681597 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681615 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681631 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681646 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681662 4775 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681675 4775 reconstruct.go:97] "Volume reconstruction finished" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.681686 4775 reconciler.go:26] "Reconciler: start to sync state" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.696232 4775 manager.go:324] Recovery completed Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.710173 4775 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.711055 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.712604 4775 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.712661 4775 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.712695 4775 kubelet.go:2335] "Starting kubelet main sync loop" Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.712748 4775 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 14:04:13 crc kubenswrapper[4775]: W0123 14:04:13.714220 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.714291 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.714329 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.714327 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.714345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.715086 4775 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.715110 4775 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.715141 4775 state_mem.go:36] "Initialized new in-memory state store" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.733490 4775 policy_none.go:49] "None policy: Start" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.734488 4775 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.734527 4775 state_mem.go:35] "Initializing new in-memory state store" Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.755696 4775 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.794504 4775 manager.go:334] "Starting Device Plugin manager" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.794587 4775 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.795350 4775 server.go:79] "Starting device plugin registration server" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.795846 4775 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.795863 4775 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.796151 4775 plugin_watcher.go:51] "Plugin Watcher Start" 
path="/var/lib/kubelet/plugins_registry" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.796268 4775 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.796284 4775 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.806843 4775 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.813089 4775 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.813201 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.814498 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.814533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.814546 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.814668 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.815593 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.815637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.815652 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.816596 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.816641 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.816696 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.816856 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.816940 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.817563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.817633 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.817659 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.817975 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.818132 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.818201 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.819220 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.819261 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.819271 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.819375 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.819398 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.819410 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.819502 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.819542 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.819561 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.819993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.820017 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.820028 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.820189 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc 
kubenswrapper[4775]: I0123 14:04:13.820610 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.820657 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.820752 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.820782 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.820794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.820966 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.820999 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.822868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.822890 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.822901 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.822885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.823016 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.823033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.858568 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884149 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884268 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
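The volumes being verified here are all kubernetes.io/host-path volumes belonging to the static pods, so there is no controller attach step to wait for, and setup amounts to little more than validating a directory on the node. A sketch of that check; the concrete paths are assumptions for illustration, since the real directories are defined in each static pod manifest:

    // hostpathcheck.go - roughly what setup reduces to for a hostPath volume:
    // confirm the path exists on the node; no mount is performed.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	paths := []string{
    		"/var/lib/kubelet", // plausibly the "var-lib-kubelet" volume above
    		"/etc/kubernetes",  // plausibly behind the "etc-kube" volume
    	}
    	for _, p := range paths {
    		if fi, err := os.Stat(p); err != nil {
    			fmt.Printf("%s: %v\n", p, err)
    		} else {
    			fmt.Printf("%s: exists, dir=%v\n", p, fi.IsDir())
    		}
    	}
    }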
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884295 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884317 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884438 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884520 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884601 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884698 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884768 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884884 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884918 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod 
\"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884939 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.884965 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.885005 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.895943 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.897333 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.897373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.897384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.897410 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 14:04:13 crc kubenswrapper[4775]: E0123 14:04:13.898059 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986570 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986688 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986716 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986738 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986758 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986780 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986832 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986900 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986940 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986944 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986999 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987008 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986942 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986950 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.986964 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987065 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987085 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987041 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987006 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987011 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987156 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987175 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987190 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc 
kubenswrapper[4775]: I0123 14:04:13.987231 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987260 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987297 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987353 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987396 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987440 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:04:13 crc kubenswrapper[4775]: I0123 14:04:13.987367 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.098910 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.100772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.100850 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.100863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.100892 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 14:04:14 crc kubenswrapper[4775]: E0123 14:04:14.101496 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
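Node registration will simply be retried; meanwhile the lease controller's retry interval doubles on failure, visible as interval="400ms" earlier and interval="800ms" a few entries below. A sketch of that doubling schedule (the 7s ceiling is an assumption for illustration, not a value taken from the log):

    // leasebackoff.go - models the doubling retry interval shown by the two
    // "Failed to ensure lease exists, will retry" entries (400ms, then 800ms).
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	interval := 400 * time.Millisecond
    	const maxInterval = 7 * time.Second // assumed cap for illustration
    	for attempt := 1; attempt <= 6; attempt++ {
    		fmt.Printf("attempt %d: retry in %v\n", attempt, interval)
    		interval *= 2
    		if interval > maxInterval {
    			interval = maxInterval
    		}
    	}
    }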
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.180531 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.190029 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.195605 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.222240 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:14 crc kubenswrapper[4775]: W0123 14:04:14.226909 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-f7ad19344ed7efe540ab7800a8701bf7dee5f186207840c265d80dae8dac1e96 WatchSource:0}: Error finding container f7ad19344ed7efe540ab7800a8701bf7dee5f186207840c265d80dae8dac1e96: Status 404 returned error can't find the container with id f7ad19344ed7efe540ab7800a8701bf7dee5f186207840c265d80dae8dac1e96 Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.229355 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 14:04:14 crc kubenswrapper[4775]: W0123 14:04:14.229604 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-bc5b73b01edb996471d5199b18fd1b0355f240ef239a8fb10116ce7c98c1e00d WatchSource:0}: Error finding container bc5b73b01edb996471d5199b18fd1b0355f240ef239a8fb10116ce7c98c1e00d: Status 404 returned error can't find the container with id bc5b73b01edb996471d5199b18fd1b0355f240ef239a8fb10116ce7c98c1e00d Jan 23 14:04:14 crc kubenswrapper[4775]: W0123 14:04:14.241099 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6a410fbbd85bb4accc91cd36d93cd297bb3631b04a1de273984eb131c5ee5997 WatchSource:0}: Error finding container 6a410fbbd85bb4accc91cd36d93cd297bb3631b04a1de273984eb131c5ee5997: Status 404 returned error can't find the container with id 6a410fbbd85bb4accc91cd36d93cd297bb3631b04a1de273984eb131c5ee5997 Jan 23 14:04:14 crc kubenswrapper[4775]: W0123 14:04:14.250636 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6673fb0332406ddf2f95a1dac18fd32fb2ee69db85acc117950dfee756919cb3 WatchSource:0}: Error finding container 6673fb0332406ddf2f95a1dac18fd32fb2ee69db85acc117950dfee756919cb3: Status 404 returned error can't find the container with id 6673fb0332406ddf2f95a1dac18fd32fb2ee69db85acc117950dfee756919cb3 Jan 23 14:04:14 crc kubenswrapper[4775]: E0123 14:04:14.260610 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" 
interval="800ms" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.501884 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.504078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.504478 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.504499 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.504543 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 14:04:14 crc kubenswrapper[4775]: E0123 14:04:14.505215 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.653235 4775 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.655268 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 00:01:01.717174157 +0000 UTC Jan 23 14:04:14 crc kubenswrapper[4775]: W0123 14:04:14.699701 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:04:14 crc kubenswrapper[4775]: E0123 14:04:14.699870 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.720316 4775 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f" exitCode=0 Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.720445 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f"} Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.720709 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f4094c65d60e4d1250e2959cf6dd7f639e2295e6fc0dfe826353af8a5e0f6143"} Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.720958 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.723043 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.723109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.723171 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.725446 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c" exitCode=0 Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.725586 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c"} Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.725647 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f7ad19344ed7efe540ab7800a8701bf7dee5f186207840c265d80dae8dac1e96"} Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.725796 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.729293 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.729331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.729350 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.731422 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212"} Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.731483 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bc5b73b01edb996471d5199b18fd1b0355f240ef239a8fb10116ce7c98c1e00d"} Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.733596 4775 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3" exitCode=0 Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.733672 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3"} Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.733694 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6673fb0332406ddf2f95a1dac18fd32fb2ee69db85acc117950dfee756919cb3"} Jan 23 14:04:14 crc kubenswrapper[4775]: 
I0123 14:04:14.733852 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.734925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.734955 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.734968 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.735962 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53"} Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.736015 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6a410fbbd85bb4accc91cd36d93cd297bb3631b04a1de273984eb131c5ee5997"} Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.736123 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.737787 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.737828 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.737840 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.740103 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.741089 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.741121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:14 crc kubenswrapper[4775]: I0123 14:04:14.741132 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:14 crc kubenswrapper[4775]: W0123 14:04:14.772603 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:04:14 crc kubenswrapper[4775]: E0123 14:04:14.772723 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:04:14 crc kubenswrapper[4775]: W0123 14:04:14.974874 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:04:14 crc kubenswrapper[4775]: E0123 14:04:14.974985 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:04:15 crc kubenswrapper[4775]: W0123 14:04:15.014584 4775 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:04:15 crc kubenswrapper[4775]: E0123 14:04:15.014774 4775 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:04:15 crc kubenswrapper[4775]: E0123 14:04:15.061612 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.305569 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.307003 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.307033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.307044 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.307064 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 14:04:15 crc kubenswrapper[4775]: E0123 14:04:15.307972 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.177:6443: connect: connection refused" node="crc" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.638752 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 14:04:15 crc kubenswrapper[4775]: E0123 14:04:15.640187 4775 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.653454 4775 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.655505 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 00:45:22.009507797 +0000 UTC Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.740245 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6b281d05f695b9f070f8a73110e3b4ea722b237b9df9a31a80b787bd7ea51fb8"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.740499 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.741490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.741523 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.741533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.742892 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.742960 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.742989 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.743123 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.744084 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.744149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.744170 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.745076 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.745098 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.745110 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.745177 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.745884 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.745913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.745926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.747082 4775 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b" exitCode=0 Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.747184 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.747494 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.748625 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.748655 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.748671 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.750741 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53" exitCode=0 Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.750786 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.750831 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.750845 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185"} Jan 23 14:04:15 crc kubenswrapper[4775]: I0123 14:04:15.750856 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca"} Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.656702 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 02:45:11.173070672 +0000 UTC Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.756978 4775 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986" exitCode=0 Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.757073 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986"} Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.757259 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.758600 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.758660 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.758682 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.762256 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.762649 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.762994 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2"} Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.763032 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792"} Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.763559 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.763600 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.763617 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.764292 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.764345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.764373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.908588 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.910213 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.910249 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.910261 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:16 crc kubenswrapper[4775]: I0123 14:04:16.910286 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.021033 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.034733 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.446765 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.657357 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:15:32.702913286 +0000 UTC Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.768083 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842"} Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.768669 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716"} Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.768696 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1"} Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.768402 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.768151 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.770187 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.770227 4775 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.770239 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.770285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.770308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:17 crc kubenswrapper[4775]: I0123 14:04:17.770319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.658509 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:47:59.596536846 +0000 UTC Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.780137 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.780430 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c"} Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.780507 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687"} Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.780546 4775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.780628 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.780683 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.781878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.781960 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.781991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.782240 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.782273 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.782292 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.782563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.782643 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.782662 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:18 crc kubenswrapper[4775]: I0123 14:04:18.810297 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.425331 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.425495 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.426755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.426847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.426865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.659649 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 11:42:40.119303822 +0000 UTC Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.783544 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.783739 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.784706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.784784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.784852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.785377 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.785465 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.785492 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:19 crc kubenswrapper[4775]: I0123 14:04:19.855576 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.660494 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:08:46.723275326 +0000 UTC Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.960606 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.961007 4775 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.962973 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.963031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.963050 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.967780 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.968123 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.969510 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.969642 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:20 crc kubenswrapper[4775]: I0123 14:04:20.969732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.143010 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.143609 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.145579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.146058 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.146150 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.488580 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.641127 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.661383 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 11:58:34.780396938 +0000 UTC Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.789982 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.789998 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.792149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.792329 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.792457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.792544 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.792641 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:21 crc kubenswrapper[4775]: I0123 14:04:21.792660 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:22 crc kubenswrapper[4775]: I0123 14:04:22.662032 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:12:26.370060671 +0000 UTC Jan 23 14:04:23 crc kubenswrapper[4775]: I0123 14:04:23.662779 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 03:35:18.835285167 +0000 UTC Jan 23 14:04:23 crc kubenswrapper[4775]: E0123 14:04:23.807062 4775 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 14:04:23 crc kubenswrapper[4775]: I0123 14:04:23.980516 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:23 crc kubenswrapper[4775]: I0123 14:04:23.981052 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:23 crc kubenswrapper[4775]: I0123 14:04:23.982487 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:23 crc kubenswrapper[4775]: I0123 14:04:23.982529 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:23 crc kubenswrapper[4775]: I0123 14:04:23.982543 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:24 crc kubenswrapper[4775]: I0123 14:04:24.663315 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 03:53:37.863093463 +0000 UTC Jan 23 14:04:25 crc kubenswrapper[4775]: I0123 14:04:25.663555 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:44:28.805238245 +0000 UTC Jan 23 14:04:26 crc kubenswrapper[4775]: I0123 14:04:26.533716 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 14:04:26 crc kubenswrapper[4775]: I0123 14:04:26.533774 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" 
output="HTTP probe failed with statuscode: 403" Jan 23 14:04:26 crc kubenswrapper[4775]: I0123 14:04:26.540916 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 14:04:26 crc kubenswrapper[4775]: I0123 14:04:26.541039 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 14:04:26 crc kubenswrapper[4775]: I0123 14:04:26.664379 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:21:44.033870084 +0000 UTC Jan 23 14:04:26 crc kubenswrapper[4775]: I0123 14:04:26.980344 4775 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:04:26 crc kubenswrapper[4775]: I0123 14:04:26.980464 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:04:27 crc kubenswrapper[4775]: I0123 14:04:27.665567 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:52:21.195248319 +0000 UTC Jan 23 14:04:28 crc kubenswrapper[4775]: I0123 14:04:28.665792 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:32:49.786394494 +0000 UTC Jan 23 14:04:28 crc kubenswrapper[4775]: I0123 14:04:28.819890 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:28 crc kubenswrapper[4775]: I0123 14:04:28.820139 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:28 crc kubenswrapper[4775]: I0123 14:04:28.821709 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:28 crc kubenswrapper[4775]: I0123 14:04:28.821772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:28 crc kubenswrapper[4775]: I0123 14:04:28.821839 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:28 crc kubenswrapper[4775]: I0123 14:04:28.825179 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:29 crc kubenswrapper[4775]: I0123 14:04:29.666386 4775 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 21:30:30.421538955 +0000 UTC Jan 23 14:04:29 crc kubenswrapper[4775]: I0123 14:04:29.813520 4775 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 14:04:29 crc kubenswrapper[4775]: I0123 14:04:29.813576 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:29 crc kubenswrapper[4775]: I0123 14:04:29.815260 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:29 crc kubenswrapper[4775]: I0123 14:04:29.815296 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:29 crc kubenswrapper[4775]: I0123 14:04:29.815307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:30 crc kubenswrapper[4775]: I0123 14:04:30.666789 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:11:21.201431058 +0000 UTC Jan 23 14:04:30 crc kubenswrapper[4775]: I0123 14:04:30.976615 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:30 crc kubenswrapper[4775]: I0123 14:04:30.976833 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:30 crc kubenswrapper[4775]: I0123 14:04:30.978205 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:30 crc kubenswrapper[4775]: I0123 14:04:30.978247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:30 crc kubenswrapper[4775]: I0123 14:04:30.978261 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.219061 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.219287 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.220530 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.220583 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.220599 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.234479 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.519930 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.522753 4775 trace.go:236] Trace[1851784562]: 
"Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 14:04:17.060) (total time: 14462ms): Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1851784562]: ---"Objects listed" error: 14462ms (14:04:31.522) Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1851784562]: [14.462153302s] [14.462153302s] END Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.522785 4775 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.523006 4775 trace.go:236] Trace[58725198]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 14:04:17.885) (total time: 13637ms): Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[58725198]: ---"Objects listed" error: 13637ms (14:04:31.522) Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[58725198]: [13.637078752s] [13.637078752s] END Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.523020 4775 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.525195 4775 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.529957 4775 trace.go:236] Trace[1499191215]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 14:04:16.703) (total time: 14826ms): Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1499191215]: ---"Objects listed" error: 14826ms (14:04:31.529) Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1499191215]: [14.82688907s] [14.82688907s] END Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.529978 4775 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.533178 4775 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.541241 4775 trace.go:236] Trace[1454904241]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 14:04:17.580) (total time: 13960ms): Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1454904241]: ---"Objects listed" error: 13960ms (14:04:31.541) Jan 23 14:04:31 crc kubenswrapper[4775]: Trace[1454904241]: [13.96055475s] [13.96055475s] END Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.541302 4775 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.551652 4775 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.558523 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45984->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.558632 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get 
\"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:45984->192.168.126.11:17697: read: connection reset by peer" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.559182 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.559238 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.559319 4775 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46288->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.559449 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46288->192.168.126.11:17697: read: connection reset by peer" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.648770 4775 apiserver.go:52] "Watching apiserver" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.651195 4775 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.651554 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.651908 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.652021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.652021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.652070 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.652247 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.652312 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.652631 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.652672 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.652699 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.656315 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.656638 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.656646 4775 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.656872 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.658023 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.658114 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.658266 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.658430 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.659661 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.662836 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.667499 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:05:44.204410091 +0000 UTC Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.702140 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.716145 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.734643 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.735321 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.735232 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.735416 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.735893 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.735981 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.736453 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.736505 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.736554 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.736581 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.737033 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.738941 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.736980 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.738989 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.737384 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739011 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739169 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739206 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739325 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739563 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739645 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739953 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739987 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.740104 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.739677 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.740201 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.740229 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.740513 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.740937 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741025 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741463 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741522 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741164 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741583 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741611 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741652 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741678 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741700 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 
14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741724 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741746 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741769 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741820 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741858 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741903 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741936 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.741968 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742070 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742118 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc 
kubenswrapper[4775]: I0123 14:04:31.742142 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742166 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742189 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742212 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742234 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742262 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742286 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742310 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742341 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742369 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742394 4775 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742401 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742443 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742466 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742485 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742502 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742519 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742535 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742551 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742566 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742585 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742601 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742651 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742668 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742684 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742691 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742701 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742742 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742768 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742791 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742833 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742909 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742934 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742964 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742995 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743031 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743066 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743101 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743134 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743157 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743179 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743200 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743220 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743241 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743262 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743283 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743305 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743326 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743350 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743373 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743397 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743419 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743444 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743472 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743494 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743538 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743565 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743588 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743609 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743631 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743654 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743769 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743793 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743839 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743864 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743887 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743910 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743932 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743953 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743978 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744000 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744024 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744054 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744097 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744119 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" 
(UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744142 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744163 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744186 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744208 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744230 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744254 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744275 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744297 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744320 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744346 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744369 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744391 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744414 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744435 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744457 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744478 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744500 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744523 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744544 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744564 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 23 
14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744588 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744612 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744654 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744675 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744696 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744719 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744765 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744825 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744858 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744889 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744924 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744955 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744987 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745012 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745038 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745084 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745106 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745130 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745153 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745176 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745203 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745226 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745250 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745272 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745296 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745319 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745342 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745363 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745387 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745417 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745443 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745466 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745489 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745512 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746518 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746631 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746668 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746702 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746739 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746775 4775 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746829 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746866 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746914 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746949 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746983 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747016 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747054 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747087 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747119 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747156 4775 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747190 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747221 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747254 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747290 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747363 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747396 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747430 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747465 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: 
\"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747501 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747552 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747578 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747603 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747629 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747655 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747682 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747705 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747727 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747751 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747776 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747844 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747878 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747909 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747936 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.747968 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748005 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748053 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:31 crc kubenswrapper[4775]: 
I0123 14:04:31.748090 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748129 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748162 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748195 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748230 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748265 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748298 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748385 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748412 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748435 4775 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748458 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748476 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748496 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748516 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748536 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748602 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748617 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748629 4775 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748643 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748659 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748673 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748687 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748701 4775 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748714 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.748729 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749319 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.750505 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.762260 4775 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.771258 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.772820 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.773162 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.777313 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.742880 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743050 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743404 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743570 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743710 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.743868 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744014 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744181 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744311 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744438 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745712 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.745927 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793443 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746191 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.746291 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.744796 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749026 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749157 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749391 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749416 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749438 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749493 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749585 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749792 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749880 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749926 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.749983 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.750010 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.750288 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.752571 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.752763 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.753370 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.753391 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.753614 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.754252 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.757621 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.766278 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.766329 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.766427 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.766563 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.766693 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767083 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767292 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767407 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767529 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767594 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.767787 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.768326 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.768631 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.768725 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.769327 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.769350 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.769898 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.769950 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.769964 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.771594 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.773075 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.779347 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.779703 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.781110 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.782107 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.782259 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.782432 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783108 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783294 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783368 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783619 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783731 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783769 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783923 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.783990 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.784335 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.784400 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.784495 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.784627 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785055 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785059 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785304 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785548 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785550 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785605 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.785882 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.786403 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.786656 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.786783 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.786810 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.786844 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.787146 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.787187 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.787244 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.787419 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.787529 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.788677 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.788984 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.790662 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.791189 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.791312 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). 
InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.791368 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.791659 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.791657 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.792106 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.792184 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.792669 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.792571 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793472 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793605 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793869 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.793939 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:04:32.293913075 +0000 UTC m=+19.288741815 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.794635 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793580 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.794820 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.794844 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795003 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795024 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795107 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:32.295081036 +0000 UTC m=+19.289909776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795136 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:32.295128447 +0000 UTC m=+19.289957187 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.793865 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.794285 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.794308 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795189 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795224 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:32.29521814 +0000 UTC m=+19.290046880 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795660 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:31 crc kubenswrapper[4775]: E0123 14:04:31.795726 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:32.295710183 +0000 UTC m=+19.290538933 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.795829 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.796035 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.796286 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.796357 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.796866 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.796980 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797038 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797058 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797176 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797260 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797387 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797400 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797431 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.797927 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.798438 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.798645 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.799411 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.799510 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.799656 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.802296 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.802986 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.803419 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.805349 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.805453 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.805970 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.806174 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.806264 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.807148 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.807297 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808241 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808347 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808449 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808516 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808554 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.808594 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809035 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809041 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809081 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809112 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809394 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809411 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809481 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809577 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.809772 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.810276 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.810380 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.810898 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811000 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811167 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811170 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811181 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811282 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811376 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811511 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811737 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.811972 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.812239 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.812613 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.812619 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.812865 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.813208 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.813413 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.814419 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.816101 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.816954 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.817365 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.818593 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.818946 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.819038 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.819140 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.821595 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.822160 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.824500 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2" exitCode=255 Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.824617 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2"} Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.831330 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.835764 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.836547 4775 scope.go:117] "RemoveContainer" containerID="f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.839536 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.839788 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.841137 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854100 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854165 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854243 4775 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854257 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854265 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854274 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854283 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854292 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854300 4775 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854308 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854317 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854325 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854333 4775 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854343 4775 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854353 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854361 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on 
node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854369 4775 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854377 4775 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854386 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854396 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854404 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854414 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854423 4775 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854432 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854441 4775 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854449 4775 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854457 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854466 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854475 4775 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc 
kubenswrapper[4775]: I0123 14:04:31.854484 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854469 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854492 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854590 4775 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854607 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854622 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854638 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854652 4775 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854623 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854694 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854727 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854743 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854757 4775 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854770 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854782 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854795 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854832 4775 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854844 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854856 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854868 4775 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854880 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854892 4775 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854904 4775 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854916 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854927 4775 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc 
kubenswrapper[4775]: I0123 14:04:31.854939 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854950 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854961 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854973 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854983 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.854995 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855006 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855017 4775 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855030 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855044 4775 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855060 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855076 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855090 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855122 4775 reconciler_common.go:293] "Volume detached for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855134 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855145 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855157 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855168 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855180 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855191 4775 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855203 4775 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855213 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855225 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855236 4775 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855252 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855264 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855275 4775 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855289 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855305 4775 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855317 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855336 4775 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855347 4775 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855359 4775 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855371 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855382 4775 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855393 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855404 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855414 4775 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855426 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: 
I0123 14:04:31.855436 4775 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855447 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855460 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855471 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855482 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855493 4775 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855503 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855513 4775 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855526 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855537 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855548 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855559 4775 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855570 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855582 4775 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855593 4775 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855633 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855645 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855658 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855677 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855688 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855699 4775 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855709 4775 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855720 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855731 4775 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855743 4775 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855754 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855764 4775 reconciler_common.go:293] "Volume detached for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855776 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855787 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855816 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855832 4775 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855842 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855853 4775 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855865 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855877 4775 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855888 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855898 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855909 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855920 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855930 4775 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855942 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855953 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855966 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855978 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.855989 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856001 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856012 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856023 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856037 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856053 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856069 4775 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856081 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856093 4775 reconciler_common.go:293] "Volume 
detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856105 4775 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856116 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856126 4775 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856137 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856148 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856158 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856170 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856181 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856198 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856212 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856222 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856233 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856245 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856256 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856268 4775 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856279 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856291 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856301 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856313 4775 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856324 4775 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856335 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856348 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856395 4775 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856409 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856420 4775 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856431 4775 
reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856443 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856453 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856465 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856475 4775 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856486 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856498 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856510 4775 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.856522 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.859272 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.873553 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.882425 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.892999 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.901495 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.909427 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.918253 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.929427 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.936074 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.965331 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.972277 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 23 14:04:31 crc kubenswrapper[4775]: I0123 14:04:31.979923 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 23 14:04:31 crc kubenswrapper[4775]: W0123 14:04:31.986202 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-26ee4b029a215201a63b86a15db85debf05093f30d7e1fcf274eabd36563edf4 WatchSource:0}: Error finding container 26ee4b029a215201a63b86a15db85debf05093f30d7e1fcf274eabd36563edf4: Status 404 returned error can't find the container with id 26ee4b029a215201a63b86a15db85debf05093f30d7e1fcf274eabd36563edf4 Jan 23 14:04:31 crc kubenswrapper[4775]: W0123 14:04:31.998111 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-7c36dd033d87222d0fe0f7c987e33d9427b14f64f1cf8383c3a518b259684a5e WatchSource:0}: Error finding container 7c36dd033d87222d0fe0f7c987e33d9427b14f64f1cf8383c3a518b259684a5e: Status 404 returned error can't find the container with id 7c36dd033d87222d0fe0f7c987e33d9427b14f64f1cf8383c3a518b259684a5e Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.359716 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.359935 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.360058 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360167 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:04:33.360068675 +0000 UTC m=+20.354897455 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360284 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.360289 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360316 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360321 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.360395 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360457 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:33.360402463 +0000 UTC m=+20.355231283 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360506 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360565 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:33.360547557 +0000 UTC m=+20.355376337 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360335 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360719 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:33.360699111 +0000 UTC m=+20.355527891 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360865 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360904 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.360928 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:32 crc kubenswrapper[4775]: E0123 14:04:32.361005 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:33.360982509 +0000 UTC m=+20.355811289 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.668031 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 23:14:33.59790214 +0000 UTC Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.829669 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.829771 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"8c5c7aff9c6468ed51eedb93fdc9e478ee18be1bd2ef3a7a1d34fd661af8fba7"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.837471 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.837578 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.837610 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"26ee4b029a215201a63b86a15db85debf05093f30d7e1fcf274eabd36563edf4"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.845121 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.847138 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.848448 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.849939 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7c36dd033d87222d0fe0f7c987e33d9427b14f64f1cf8383c3a518b259684a5e"} Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.866057 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df18
5ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942
ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.879071 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.890397 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.901419 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.914081 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.929272 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.948703 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23
T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:32 crc kubenswrapper[4775]: I0123 14:04:32.977594 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.003747 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.030027 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-m
etrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.045908 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.057932 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.071371 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.084985 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.096789 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.112760 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.369501 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.369624 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.369675 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369747 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:04:35.369687816 +0000 UTC m=+22.364516626 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369836 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369840 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.369867 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369960 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:35.369932913 +0000 UTC m=+22.364761743 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369990 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.370009 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.369863 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370106 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370073 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:35.370056726 +0000 UTC m=+22.364885556 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370115 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370136 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370145 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:35.370135918 +0000 UTC m=+22.364964668 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370150 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.370200 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:35.37018902 +0000 UTC m=+22.365017850 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.668790 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:40:16.186463366 +0000 UTC Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.713742 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.713790 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.713915 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.714007 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.714166 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:33 crc kubenswrapper[4775]: E0123 14:04:33.714275 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.722247 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.724337 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.726979 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.727199 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.728841 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.731442 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.732765 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.734513 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.735721 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.736498 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.737657 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.738290 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.739508 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.740038 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.740529 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.741441 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.741964 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.742854 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.743219 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.743746 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.744810 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.745370 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.746480 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.746946 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.748090 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.748583 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.749372 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.750549 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.751018 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.751942 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.752390 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.753403 4775 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.753395 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.753683 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.755297 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" 
path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.756426 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.756854 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.758589 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.759269 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.760198 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.760998 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.762033 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.762485 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.763627 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.764397 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.765439 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.765964 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.766918 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.767476 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.768873 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.769073 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.769407 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.770354 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.770866 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.771764 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.772357 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.772826 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.782125 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.797741 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.814097 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.829761 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.845468 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.983059 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.987208 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.991746 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 23 14:04:33 crc kubenswrapper[4775]: I0123 14:04:33.998150 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.014725 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.032086 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.052623 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.071577 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.083609 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.096574 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.110990 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.126574 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.149907 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.170235 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.184436 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.201511 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.218066 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.243606 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.262132 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.280514 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.669390 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:22:40.193282639 +0000 UTC Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.725882 4775 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.728149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.728225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.728250 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.728348 4775 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.735215 4775 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.735534 4775 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.736786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.736863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.736883 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.736905 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.736921 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.759215 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.763712 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.763772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.763790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.763838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.763856 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.783210 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.786987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.787037 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.787052 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.787076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.787095 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.812626 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.817688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.817753 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.817769 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.817790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.817843 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.831161 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.835247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.835304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
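Every status-patch attempt above fails identically: the API server cannot deliver the admission POST to the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because that endpoint's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-23. A minimal way to confirm the certificate's validity window from the node is sketched below; it assumes Python 3 with the third-party cryptography package, neither of which is implied by the log itself.

```python
# Sketch: read the webhook's serving certificate and print its validity
# window (host/port taken from the log line above). Assumes Python 3 and
# the third-party "cryptography" package -- an assumption, not part of the log.
import ssl
from cryptography import x509

# ssl.get_server_certificate does not verify the peer by default, so it
# still works against an expired certificate.
pem = ssl.get_server_certificate(("127.0.0.1", 9743))
cert = x509.load_pem_x509_certificate(pem.encode())

print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 per the log
```

Any notAfter earlier than the node clock reproduces the x509 "certificate has expired" failure seen in each retry.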
event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.835322 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.835344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.835360 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.884763 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:34 crc kubenswrapper[4775]: E0123 14:04:34.884948 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.886778 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
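The "exceeds retry count" line marks the kubelet giving up for this sync period: the upstream kubelet attempts the status update a fixed nodeStatusUpdateRetry times (5 in the upstream source) before surfacing this error, then starts over at the next node-status interval, which is why the same burst keeps recurring. A schematic of that bounded-retry loop, written as illustrative Python rather than the actual Go implementation:

```python
# Schematic of the kubelet's bounded status-update retry (illustrative
# Python, not the real Go code). nodeStatusUpdateRetry is 5 upstream.
NODE_STATUS_UPDATE_RETRY = 5

def update_node_status(try_patch):
    """try_patch() returns None on success or an error string."""
    for attempt in range(NODE_STATUS_UPDATE_RETRY):
        err = try_patch()          # one PATCH against the API server
        if err is None:
            return None            # heartbeat recorded
        # logged as: "Error updating node status, will retry"
    # logged as: "Unable to update node status" / "exceeds retry count"
    return "update node status exceeds retry count"
```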
event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.886847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.886865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.886886 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.886901 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.989566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.989605 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.989614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.989630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:34 crc kubenswrapper[4775]: I0123 14:04:34.989641 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:34Z","lastTransitionTime":"2026-01-23T14:04:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.092391 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.092446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.092463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.092489 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.092526 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.195035 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.195112 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.195136 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.195164 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.195186 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.298531 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.298584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.298601 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.298627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.298644 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
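The NotReady heartbeat above recurs roughly every 100 ms because the container runtime keeps reporting NetworkReady=false until a CNI config appears in /etc/kubernetes/cni/net.d/; judging by the network-node-identity webhook, the network provider here is OVN-Kubernetes, whose pods would write that file once they come up. A small watch loop for diagnosing the condition (a sketch: the path comes from the log, everything else is assumed):

```python
# Sketch: poll for a CNI network config at the path named in the log.
# Diagnostic only -- the actual fix is the network provider starting up
# and writing this file itself.
import glob
import time

CNI_DIR = "/etc/kubernetes/cni/net.d"

while True:
    confs = sorted(glob.glob(CNI_DIR + "/*.conf") + glob.glob(CNI_DIR + "/*.conflist"))
    if confs:
        print("CNI config present, NetworkReady should recover:", confs)
        break
    print("still no CNI configuration file; node remains NotReady")
    time.sleep(2)
```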
Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.387185 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.387286 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.387330 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.387396 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387449 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:04:39.387413519 +0000 UTC m=+26.382242289 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.387520 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387540 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387619 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387635 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:39.387612284 +0000 UTC m=+26.382441054 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387548 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387733 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387767 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387677 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:39.387663825 +0000 UTC m=+26.382492605 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.387910 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:39.387882341 +0000 UTC m=+26.382711151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.388020 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.388046 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.388067 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.388132 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:39.388112157 +0000 UTC m=+26.382940957 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.401965 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.402024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.402046 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.402076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.402098 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.505727 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.505791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.505854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.505900 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.505924 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.609308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.609365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.609384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.609409 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.609427 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.670379 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:31:58.188000595 +0000 UTC Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.712083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.712122 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.712133 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.712148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.712159 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.713515 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.713526 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.713705 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.713935 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.714109 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:35 crc kubenswrapper[4775]: E0123 14:04:35.714307 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.814597 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.814650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.814669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.814693 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.814710 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.864228 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.885362 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:35Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.902812 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:35Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.947137 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:35Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.948837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.948871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.948881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.948895 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.948904 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:35Z","lastTransitionTime":"2026-01-23T14:04:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.957975 4775 csr.go:261] certificate signing request csr-hz74m is approved, waiting to be issued Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.977379 4775 csr.go:257] certificate signing request csr-hz74m is issued Jan 23 14:04:35 crc kubenswrapper[4775]: I0123 14:04:35.983042 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b2670
2f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:35Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.006540 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.008078 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-kv8zk"] Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.008337 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.010238 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.010323 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.011223 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.028377 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\
\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is 
complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.048088 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.051130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.051158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.051166 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.051178 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.051187 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.086048 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.091666 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c6e25021-b268-4a6c-851d-43eb5504a3d2-hosts-file\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.091722 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmxcw\" (UniqueName: \"kubernetes.io/projected/c6e25021-b268-4a6c-851d-43eb5504a3d2-kube-api-access-fmxcw\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.104599 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.132793 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.153413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.153445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.153455 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.153471 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.153486 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.166507 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.187565 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.192650 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c6e25021-b268-4a6c-851d-43eb5504a3d2-hosts-file\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.192734 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmxcw\" (UniqueName: \"kubernetes.io/projected/c6e25021-b268-4a6c-851d-43eb5504a3d2-kube-api-access-fmxcw\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.192825 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/c6e25021-b268-4a6c-851d-43eb5504a3d2-hosts-file\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.200500 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.214166 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmxcw\" (UniqueName: \"kubernetes.io/projected/c6e25021-b268-4a6c-851d-43eb5504a3d2-kube-api-access-fmxcw\") pod \"node-resolver-kv8zk\" (UID: \"c6e25021-b268-4a6c-851d-43eb5504a3d2\") " pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.215522 4775 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.229748 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.243025 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.253648 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.255347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.255391 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.255403 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.255420 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.255429 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.263980 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.273093 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.321493 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kv8zk" Jan 23 14:04:36 crc kubenswrapper[4775]: W0123 14:04:36.332749 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6e25021_b268_4a6c_851d_43eb5504a3d2.slice/crio-8a869a6f98e205e7ddb8e80b600864259e7faf3ae41f6a70ef78ed7edd879eab WatchSource:0}: Error finding container 8a869a6f98e205e7ddb8e80b600864259e7faf3ae41f6a70ef78ed7edd879eab: Status 404 returned error can't find the container with id 8a869a6f98e205e7ddb8e80b600864259e7faf3ae41f6a70ef78ed7edd879eab Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.360654 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.360683 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.360693 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.360706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.360716 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.463474 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.463522 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.463534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.463555 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.463570 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.508341 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-hpxpf"] Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.508691 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.509864 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-4q9qg"] Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.510461 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.511264 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.511278 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.511528 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.511963 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.512019 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.513940 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-8j5kp"] Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.514593 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.514995 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.515730 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.515753 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.516023 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.516232 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.516448 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.517096 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.540236 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68
e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.559725 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.566307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.566348 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.566361 4775 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.566380 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.566395 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.574319 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.585424 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595630 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-system-cni-dir\") pod 
\"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595711 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-binary-copy\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595740 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cnibin\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595776 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-os-release\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595837 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.595869 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.598608 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.612229 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.629302 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.654638 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.669529 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.669566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.669574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.669589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.669598 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.670795 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:19:26.79075883 +0000 UTC Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.689362 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696372 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fea0767-0566-4214-855d-ed0373946271-mcd-auth-proxy-config\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696478 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-cnibin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696504 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-conf-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696573 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-system-cni-dir\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696632 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-os-release\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-socket-dir-parent\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696777 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-k8s-cni-cncf-io\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696720 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-system-cni-dir\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696855 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-etc-kubernetes\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696925 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-multus-certs\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.696990 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-hostroot\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697017 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697069 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-netns\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697097 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9shl\" (UniqueName: \"kubernetes.io/projected/ba4447c0-bada-49eb-b6b4-b25feff627a9-kube-api-access-v9shl\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697163 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-multus\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697232 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4fea0767-0566-4214-855d-ed0373946271-rootfs\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697297 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-system-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697324 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-daemon-config\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697438 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-binary-copy\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.697463 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4fea0767-0566-4214-855d-ed0373946271-proxy-tls\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698247 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-binary-copy\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698328 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-bin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698420 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cnibin\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698480 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-os-release\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc 
kubenswrapper[4775]: I0123 14:04:36.698631 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gddb\" (UniqueName: \"kubernetes.io/projected/3dd95cd2-5d8c-4e14-bc94-67bb80749037-kube-api-access-6gddb\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698531 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cnibin\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698716 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbc24\" (UniqueName: \"kubernetes.io/projected/4fea0767-0566-4214-855d-ed0373946271-kube-api-access-tbc24\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698741 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-cni-binary-copy\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698855 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-kubelet\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698588 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-os-release\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698930 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.698956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.699741 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3dd95cd2-5d8c-4e14-bc94-67bb80749037-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.700069 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3dd95cd2-5d8c-4e14-bc94-67bb80749037-tuning-conf-dir\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.715392 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]
},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.760013 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.771538 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.771579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.771595 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.771616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.771632 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.784306 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.797180 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800492 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-hostroot\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800579 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-hostroot\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800641 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800735 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800784 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-netns\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " 
pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800863 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-multus\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800935 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-multus\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800859 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-netns\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.800898 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9shl\" (UniqueName: \"kubernetes.io/projected/ba4447c0-bada-49eb-b6b4-b25feff627a9-kube-api-access-v9shl\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801059 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4fea0767-0566-4214-855d-ed0373946271-rootfs\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801142 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/4fea0767-0566-4214-855d-ed0373946271-rootfs\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801093 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-system-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801207 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-daemon-config\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801274 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-system-cni-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801349 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4fea0767-0566-4214-855d-ed0373946271-proxy-tls\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801402 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-bin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.801524 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-cni-bin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802030 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-daemon-config\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802330 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gddb\" (UniqueName: \"kubernetes.io/projected/3dd95cd2-5d8c-4e14-bc94-67bb80749037-kube-api-access-6gddb\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802358 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbc24\" (UniqueName: \"kubernetes.io/projected/4fea0767-0566-4214-855d-ed0373946271-kube-api-access-tbc24\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802376 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-cni-binary-copy\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802394 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-kubelet\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802423 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-cnibin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802439 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-conf-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802455 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fea0767-0566-4214-855d-ed0373946271-mcd-auth-proxy-config\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802469 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-os-release\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802485 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-etc-kubernetes\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802502 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-socket-dir-parent\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802518 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-k8s-cni-cncf-io\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802540 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-multus-certs\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802588 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-multus-certs\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802768 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-os-release\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802815 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-etc-kubernetes\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" 
Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802850 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-socket-dir-parent\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802875 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-run-k8s-cni-cncf-io\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.802895 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-host-var-lib-kubelet\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.803368 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ba4447c0-bada-49eb-b6b4-b25feff627a9-cni-binary-copy\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.803380 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4fea0767-0566-4214-855d-ed0373946271-mcd-auth-proxy-config\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.803414 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-cnibin\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.803438 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ba4447c0-bada-49eb-b6b4-b25feff627a9-multus-conf-dir\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.807524 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4fea0767-0566-4214-855d-ed0373946271-proxy-tls\") pod \"machine-config-daemon-4q9qg\" (UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.819533 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.825645 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbc24\" (UniqueName: \"kubernetes.io/projected/4fea0767-0566-4214-855d-ed0373946271-kube-api-access-tbc24\") pod \"machine-config-daemon-4q9qg\" 
(UID: \"4fea0767-0566-4214-855d-ed0373946271\") " pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.826301 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gddb\" (UniqueName: \"kubernetes.io/projected/3dd95cd2-5d8c-4e14-bc94-67bb80749037-kube-api-access-6gddb\") pod \"multus-additional-cni-plugins-8j5kp\" (UID: \"3dd95cd2-5d8c-4e14-bc94-67bb80749037\") " pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.828183 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9shl\" (UniqueName: \"kubernetes.io/projected/ba4447c0-bada-49eb-b6b4-b25feff627a9-kube-api-access-v9shl\") pod \"multus-hpxpf\" (UID: \"ba4447c0-bada-49eb-b6b4-b25feff627a9\") " pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.830939 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hpxpf" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.842371 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.844369 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: W0123 14:04:36.853061 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fea0767_0566_4214_855d_ed0373946271.slice/crio-1f5a10de2515f742f1f553243cf07f9610692a56a2cc9d098bc9bd2cbbc29d26 WatchSource:0}: Error finding container 1f5a10de2515f742f1f553243cf07f9610692a56a2cc9d098bc9bd2cbbc29d26: Status 404 returned error can't find the container with id 1f5a10de2515f742f1f553243cf07f9610692a56a2cc9d098bc9bd2cbbc29d26 Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.858282 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.863533 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.867088 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerStarted","Data":"bfd5db624b10f5d55f84c2d097f28815ba13871d7c58be819f1a0199a386f3e8"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.873425 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kv8zk" event={"ID":"c6e25021-b268-4a6c-851d-43eb5504a3d2","Type":"ContainerStarted","Data":"a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.873492 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kv8zk" event={"ID":"c6e25021-b268-4a6c-851d-43eb5504a3d2","Type":"ContainerStarted","Data":"8a869a6f98e205e7ddb8e80b600864259e7faf3ae41f6a70ef78ed7edd879eab"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.878254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.878304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.878317 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.878531 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.878574 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.880090 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"1f5a10de2515f742f1f553243cf07f9610692a56a2cc9d098bc9bd2cbbc29d26"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.887165 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.917395 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.930234 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qrvs8"] Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.931153 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.933977 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934109 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934144 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934233 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934263 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934373 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.934433 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.943932 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.959355 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.973919 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.978740 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-23 13:59:35 +0000 UTC, rotation deadline is 2026-10-15 06:36:12.568472625 +0000 UTC Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.978855 4775 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6352h31m35.58962191s for next certificate rotation Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.980852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.980908 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.980921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.980940 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.980962 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:36Z","lastTransitionTime":"2026-01-23T14:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:36 crc kubenswrapper[4775]: I0123 14:04:36.989717 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:36Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003476 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003508 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003529 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003546 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003561 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet\") pod \"ovnkube-node-qrvs8\" 
(UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003578 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003603 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003640 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003664 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003698 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003713 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003741 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003769 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003786 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003814 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003837 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jls\" (UniqueName: \"kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003900 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003913 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.003926 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.011649 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.028646 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.044096 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.059212 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.074907 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.083466 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.083520 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.083532 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.083553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.083569 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.097045 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104290 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104326 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jls\" (UniqueName: \"kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104350 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104365 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104362 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 
14:04:37.104397 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104413 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104419 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104432 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104471 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104414 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104509 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104517 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104499 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104473 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104656 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104734 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104791 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104863 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104864 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104897 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104924 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash\") pod 
\"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104944 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104952 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104970 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104994 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105020 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105113 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105157 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.104921 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config\") pod \"ovnkube-node-qrvs8\" (UID: 
\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105491 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105518 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105626 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.105944 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.109466 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.112297 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.126859 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jls\" (UniqueName: \"kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls\") pod \"ovnkube-node-qrvs8\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") " pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.140774 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.156385 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.169293 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.180315 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.186235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.186262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.186270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.186283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.186293 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.193418 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.205883 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.219090 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.243958 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:37 crc kubenswrapper[4775]: W0123 14:04:37.272912 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd5906e8_fa10_4ad1_b8c2_6bf9d00a9c06.slice/crio-c9b1bad48b28a1f69c2c2d6ac40d31127808a59f11181daf49f1fb5d9684dc62 WatchSource:0}: Error finding container c9b1bad48b28a1f69c2c2d6ac40d31127808a59f11181daf49f1fb5d9684dc62: Status 404 returned error can't find the container with id c9b1bad48b28a1f69c2c2d6ac40d31127808a59f11181daf49f1fb5d9684dc62 Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.294052 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.294097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.294107 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.294123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.294136 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.296965 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.321576 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.397469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.397501 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.397512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.397527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.397537 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.499757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.499819 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.499828 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.499843 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.499853 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.602067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.602110 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.602124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.602145 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.602159 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.671951 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 09:47:04.141968643 +0000 UTC Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.706124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.706823 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.706839 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.706864 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.706880 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.713634 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.713710 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.713638 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:37 crc kubenswrapper[4775]: E0123 14:04:37.713843 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:37 crc kubenswrapper[4775]: E0123 14:04:37.713999 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:37 crc kubenswrapper[4775]: E0123 14:04:37.714236 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.809307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.809370 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.809390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.809416 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.809433 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.888873 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" exitCode=0 Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.888983 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.889062 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"c9b1bad48b28a1f69c2c2d6ac40d31127808a59f11181daf49f1fb5d9684dc62"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.894712 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083" exitCode=0 Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.894827 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.894861 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerStarted","Data":"ff86fb0136c263f06815ca9405f4979fa529e7b493f52b56a3db5760f1f5fb00"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.897895 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.897937 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.900646 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerStarted","Data":"d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.906293 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.914439 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.914466 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.914475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.914490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.914502 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:37Z","lastTransitionTime":"2026-01-23T14:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.918727 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.931256 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.947474 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mount
Path\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.960116 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.972981 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.983411 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:37 crc kubenswrapper[4775]: I0123 14:04:37.996847 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.010366 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.017133 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.017198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.017209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.017230 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.017242 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.024278 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.035640 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.047530 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.065419 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.076495 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.089158 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.104040 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.116298 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.119207 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.119233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.119242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.119261 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.119274 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.137243 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.157950 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.171645 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.180850 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.192356 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.204058 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.216332 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.223105 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.223140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.223153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.223170 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.223183 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.228147 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.240886 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.263202 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.273036 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.325964 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.326203 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.326308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.326396 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.326475 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.428209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.428439 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.428526 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.428649 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.428743 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.531642 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.531964 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.531976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.531991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.532002 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.634663 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.634711 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.634728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.634747 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.634759 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.672783 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 04:46:27.17446305 +0000 UTC Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.739081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.739130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.739144 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.739164 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.739178 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.805322 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dwmhf"] Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.805739 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.807710 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.807951 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.808173 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.810010 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.819905 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.822654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5473290b-b658-4193-9287-af63cfc2a1c9-serviceca\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.822717 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtgsg\" (UniqueName: \"kubernetes.io/projected/5473290b-b658-4193-9287-af63cfc2a1c9-kube-api-access-qtgsg\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.822751 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5473290b-b658-4193-9287-af63cfc2a1c9-host\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.834220 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.842308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.842363 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.842375 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.842393 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.842406 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.850043 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.870739 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.887137 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.905678 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0" exitCode=0 Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.905782 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.910182 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.910223 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.910237 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.910250 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 
14:04:38.910262 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.910274 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.913541 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.923517 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5473290b-b658-4193-9287-af63cfc2a1c9-serviceca\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.923629 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtgsg\" (UniqueName: \"kubernetes.io/projected/5473290b-b658-4193-9287-af63cfc2a1c9-kube-api-access-qtgsg\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.923752 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5473290b-b658-4193-9287-af63cfc2a1c9-host\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.923838 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5473290b-b658-4193-9287-af63cfc2a1c9-host\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.924581 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5473290b-b658-4193-9287-af63cfc2a1c9-serviceca\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.932278 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.944876 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.944932 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.944951 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.944978 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.944996 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:38Z","lastTransitionTime":"2026-01-23T14:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.946382 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.953074 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtgsg\" (UniqueName: \"kubernetes.io/projected/5473290b-b658-4193-9287-af63cfc2a1c9-kube-api-access-qtgsg\") pod \"node-ca-dwmhf\" (UID: \"5473290b-b658-4193-9287-af63cfc2a1c9\") " pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.961305 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.974993 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:38 crc kubenswrapper[4775]: I0123 14:04:38.986626 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.000325 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:38Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.018363 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc 
kubenswrapper[4775]: I0123 14:04:39.031100 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.041565 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.047747 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.047785 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.047795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.047823 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.047832 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.056292 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.067091 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.079622 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.102230 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.114417 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.122953 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.128914 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dwmhf" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.134782 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.147043 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.150934 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.150960 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.150970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.150989 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.151000 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: W0123 14:04:39.155494 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5473290b_b658_4193_9287_af63cfc2a1c9.slice/crio-55dd733e6d8c138e872289f4dcefedfaf7b5ac2253edf1a530f086da69216502 WatchSource:0}: Error finding container 55dd733e6d8c138e872289f4dcefedfaf7b5ac2253edf1a530f086da69216502: Status 404 returned error can't find the container with id 55dd733e6d8c138e872289f4dcefedfaf7b5ac2253edf1a530f086da69216502 Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.161181 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.180252 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.192199 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.208200 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.232265 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}
,{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.251793 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.255581 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.255813 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.255888 4775 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.255957 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.256019 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.282097 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.358697 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.358738 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.358752 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.358769 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.358780 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.427594 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427692 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:04:47.427671752 +0000 UTC m=+34.422500502 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.427737 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.427781 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.427825 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.427860 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427918 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427936 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427935 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427948 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.427988 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 
nodeName:}" failed. No retries permitted until 2026-01-23 14:04:47.42798118 +0000 UTC m=+34.422809920 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428002 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:47.42799672 +0000 UTC m=+34.422825460 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428013 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428035 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428133 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:47.428112223 +0000 UTC m=+34.422940973 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428051 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428170 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.428212 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:04:47.428204066 +0000 UTC m=+34.423032826 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.461298 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.461342 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.461350 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.461364 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.461373 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.563341 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.563382 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.563391 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.563405 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.563414 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.665975 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.666033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.666054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.666078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.666096 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.673746 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:10:46.819302227 +0000 UTC Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.713354 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.713401 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.713456 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.713515 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.713617 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:39 crc kubenswrapper[4775]: E0123 14:04:39.713707 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.769005 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.769060 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.769076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.769101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.769119 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.873462 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.873682 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.873764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.873898 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.874141 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.916786 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa" exitCode=0 Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.916859 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.919485 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dwmhf" event={"ID":"5473290b-b658-4193-9287-af63cfc2a1c9","Type":"ContainerStarted","Data":"5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.919560 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dwmhf" event={"ID":"5473290b-b658-4193-9287-af63cfc2a1c9","Type":"ContainerStarted","Data":"55dd733e6d8c138e872289f4dcefedfaf7b5ac2253edf1a530f086da69216502"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.948339 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is 
after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.965054 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.977142 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.977197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.977217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.977244 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.977265 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:39Z","lastTransitionTime":"2026-01-23T14:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:39 crc kubenswrapper[4775]: I0123 14:04:39.985083 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:39Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.017892 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.030442 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.048106 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.063330 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.079437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.079493 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.079512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.079537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.079556 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.082774 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.100116 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.118662 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.137968 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.153323 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.174998 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.181479 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.181550 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.181563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.181580 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.181591 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.186390 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.202066 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.215103 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.226263 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.242870 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.254541 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.267697 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.280651 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.283853 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.283881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.283892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.283907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.283917 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.289891 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.302358 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.315567 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.326586 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.343976 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.365796 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.378910 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.385983 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.386018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.386027 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.386045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.386056 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.390558 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.406275 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.488421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.488502 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.488520 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.488548 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.488566 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.592327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.592446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.592479 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.592518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.592545 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.674507 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:47:32.171102222 +0000 UTC Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.696560 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.696628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.696645 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.696671 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.696693 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.800155 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.800228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.800258 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.800290 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.800315 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.945469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.945528 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.945546 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.945574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.945593 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:40Z","lastTransitionTime":"2026-01-23T14:04:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.952243 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a" exitCode=0 Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.952336 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a"} Jan 23 14:04:40 crc kubenswrapper[4775]: I0123 14:04:40.976933 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tb
c24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:40Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.008735 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.032785 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.049678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.049728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.049748 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.049770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.049787 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.052596 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.069224 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/
webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.087505 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.098322 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.117543 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.136274 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.154031 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.157651 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.157684 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.157694 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.157708 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.157718 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.168702 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.184702 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.197141 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.208788 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.236926 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.260238 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.260288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.260299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.260320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.260333 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.362699 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.362733 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.362744 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.362758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.362767 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.465712 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.465773 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.465795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.466030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.466049 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.569014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.569061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.569071 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.569090 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.569103 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.671554 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.671922 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.671931 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.671946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.671958 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.674773 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:17:25.205481822 +0000 UTC Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.713065 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.713119 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.713158 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:41 crc kubenswrapper[4775]: E0123 14:04:41.713272 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:41 crc kubenswrapper[4775]: E0123 14:04:41.713379 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:41 crc kubenswrapper[4775]: E0123 14:04:41.713581 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.777602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.777630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.777639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.777654 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.777663 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.884428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.884482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.884499 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.884525 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.884543 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.960846 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.964628 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf" exitCode=0 Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.964677 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf"} Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.983632 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:41Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.987704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.987773 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.987794 4775 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.987852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:41 crc kubenswrapper[4775]: I0123 14:04:41.987881 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:41Z","lastTransitionTime":"2026-01-23T14:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.015979 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.033115 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.055393 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.077816 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.092961 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.093010 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.093026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.093048 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.093064 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.108252 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.122211 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.140116 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.156424 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.169505 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.185075 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.195965 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.195999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.196009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.196023 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.196036 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.204423 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.216020 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.234047 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.246660 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.298276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.298567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.298647 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.298730 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.298825 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.402527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.402602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.402620 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.402644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.402661 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.506283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.506368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.506386 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.506434 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.506452 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.609661 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.609717 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.609734 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.609761 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.609778 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.675874 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:39:32.504875224 +0000 UTC Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.712347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.712431 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.712445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.712463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.712476 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.815116 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.815163 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.815183 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.815203 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.815214 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.918920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.918970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.918982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.919001 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.919016 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:42Z","lastTransitionTime":"2026-01-23T14:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.972116 4775 generic.go:334] "Generic (PLEG): container finished" podID="3dd95cd2-5d8c-4e14-bc94-67bb80749037" containerID="cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988" exitCode=0 Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.972173 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerDied","Data":"cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988"} Jan 23 14:04:42 crc kubenswrapper[4775]: I0123 14:04:42.988872 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:42Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.002947 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.015568 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.022095 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.022143 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.022154 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.022174 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.022185 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.028903 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.081443 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.104298 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.122518 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.125004 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.125184 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.125324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.125454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.125580 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.133431 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.152888 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.165048 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.175378 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.187737 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.198683 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.208437 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.226748 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",
\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.227364 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.227404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.227413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.227428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.227437 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.330238 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.330698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.330710 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.330732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.330747 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.433946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.433989 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.434001 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.434019 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.434031 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.521969 4775 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.541905 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.541936 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.541947 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.541963 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.541975 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.643957 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.643989 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.644011 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.644030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.644042 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.676959 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:22:46.144080737 +0000 UTC Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.713508 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.713606 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.713529 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:43 crc kubenswrapper[4775]: E0123 14:04:43.713698 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:43 crc kubenswrapper[4775]: E0123 14:04:43.713831 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:43 crc kubenswrapper[4775]: E0123 14:04:43.713926 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.732909 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.747176 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.747251 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.747268 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.747288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.747304 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.751632 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.768230 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.794152 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.814411 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.831307 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.853127 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.854061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.854109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.854122 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.854140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.854155 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.871680 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.885831 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.907042 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z 
is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.919603 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.937172 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.957257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.957360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.957374 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.957406 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.957419 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:43Z","lastTransitionTime":"2026-01-23T14:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.959299 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.973581 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.980380 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.980769 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.980848 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.980862 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.984963 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" event={"ID":"3dd95cd2-5d8c-4e14-bc94-67bb80749037","Type":"ContainerStarted","Data":"2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060"} Jan 23 14:04:43 crc kubenswrapper[4775]: I0123 14:04:43.990698 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.005608 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.011570 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.013142 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.026183 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.038142 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.058996 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.059940 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.059980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.059992 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 
14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.060009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.060021 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.069606 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.086940 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.104617 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.120739 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.135710 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.155061 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.163146 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.163193 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.163202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.163219 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.163231 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.171055 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.183678 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.205503 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.218943 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.229822 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.246364 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.258324 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.265224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.265257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.265266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.265279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.265289 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.270195 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.284537 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.305333 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.319031 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.336134 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.355550 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.367945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.367996 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.368008 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.368031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.368044 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.370663 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.383968 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.399732 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.411225 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.424086 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.443077 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da179842
1f233f341251c9fd5a17cf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.457830 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:44Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.470628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.470678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.470696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.470719 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.470735 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.573784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.573854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.573867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.573885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.573895 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677109 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:48:02.468263613 +0000 UTC Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677780 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677884 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677906 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677933 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.677950 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.779991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.780067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.780092 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.780122 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.780147 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.882749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.882907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.882982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.883018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.883093 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.987137 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.987206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.987224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.987249 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.987267 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.988784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.988866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.988889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.988917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:44 crc kubenswrapper[4775]: I0123 14:04:44.988939 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:44Z","lastTransitionTime":"2026-01-23T14:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.007915 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:45Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.012637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.012754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.012776 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.012830 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.012859 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.035376 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:45Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.039563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.039616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
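
Each of the status-update retries above dies the same way: the PATCH to the API server is rejected because the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-23. A minimal inspection sketch, assuming the endpoint is reachable from the node and that a Python 3 interpreter is available there (both assumptions; the log says nothing about host tooling), fetches the certificate without trusting it:

```python
# Diagnostic sketch (hypothetical): retrieve the webhook's serving
# certificate for offline inspection. Host/port come from the URL in the log.
import socket, ssl

HOST, PORT = "127.0.0.1", 9743  # from https://127.0.0.1:9743/node in the log

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False          # we only want to look at the cert,
ctx.verify_mode = ssl.CERT_NONE     # not to trust it -- it has expired

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # DER bytes, even unverified

print(ssl.DER_cert_to_PEM_cert(der))
```

Piping the printed PEM through `openssl x509 -noout -dates` should show a notAfter matching the 2025-08-24T17:21:41Z deadline quoted in every x509 error above.
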
event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.039630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.039650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.039662 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.054040 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:45Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.058628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.058669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
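
Across the repeated attempts it is easy to lose count of how many times the kubelet retries before giving up (the "update node status exceeds retry count" entry further below). A throwaway triage sketch, assuming the journal has been saved to a file such as kubelet.log (a hypothetical filename), counts the attempts and pulls out the clock-versus-certificate complaint:

```python
# Hypothetical log triage: count node-status update attempts and extract
# the certificate validity complaint from a saved journal dump.
import re

with open("kubelet.log", encoding="utf-8") as fh:  # hypothetical path
    text = fh.read()

attempts = len(re.findall(r'"Error updating node status, will retry"', text))
gave_up = '"Unable to update node status"' in text
m = re.search(r'current time (\S+) is after ([^\s"]+)', text)

print(f"status-update attempts logged: {attempts}; gave up: {gave_up}")
if m:
    print(f"node clock {m.group(1)} vs certificate notAfter {m.group(2)}")
```

Run against this stretch of the journal, it should report five attempts followed by the give-up entry, consistent with the kubelet exhausting its retry budget within a single status-sync pass.
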
event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.058679 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.058698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.058710 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.075296 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:45Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.080373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.080423 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
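
Separately from the webhook failure, the Ready=False condition itself comes from the runtime reporting NetworkReady=false: no CNI configuration file in /etc/kubernetes/cni/net.d/. A small sketch, assuming access to that path on the node (the directory name is taken from the log line; the .conf/.conflist/.json extensions are the ones the CNI config loader conventionally accepts, an assumption here), reproduces that check:

```python
# Hypothetical sketch: reproduce the kubelet's "no CNI configuration file"
# complaint by listing recognized config files in the directory it names.
import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # from the NetworkReady message

try:
    entries = sorted(os.listdir(CNI_CONF_DIR))
except FileNotFoundError:
    entries = []

configs = [e for e in entries if e.endswith((".conf", ".conflist", ".json"))]
if configs:
    print("CNI configs present:", configs)
else:
    print(f"no CNI configuration file in {CNI_CONF_DIR} -- "
          "matches the KubeletNotReady message in the log")
```

Until the network provider writes a config there, the node stays NotReady regardless of whether the node-status patch itself succeeds.
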
event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.080436 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.080455 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.080469 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.095966 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:45Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.096138 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.099763 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.099833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.099847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.099862 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.099875 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.202173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.202216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.202224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.202240 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.202250 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.304664 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.304703 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.304713 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.304731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.304742 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.406816 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.406856 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.406866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.406881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.406892 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.508722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.508763 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.508776 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.508793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.508821 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.611721 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.611769 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.611777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.611794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.611823 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.677576 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:46:47.758020107 +0000 UTC Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.713162 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.713214 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.713303 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.713341 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.713524 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:45 crc kubenswrapper[4775]: E0123 14:04:45.713643 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.714723 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.714776 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.714794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.714861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.714889 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.817921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.817969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.817982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.818002 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.818016 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.920388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.920430 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.920442 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.920458 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:45 crc kubenswrapper[4775]: I0123 14:04:45.920468 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:45Z","lastTransitionTime":"2026-01-23T14:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.022984 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.023368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.023383 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.023400 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.023413 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.126161 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.126222 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.126232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.126247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.126277 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.229173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.229269 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.229287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.229315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.229332 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.332597 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.332651 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.332669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.332692 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.332709 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.440317 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.440385 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.440403 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.440428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.440444 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.542930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.542998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.543022 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.543053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.543075 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.646241 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.646287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.646303 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.646325 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.646339 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.678707 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:06:27.931427106 +0000 UTC Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.748667 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.748720 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.748735 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.748756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.748770 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.852124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.852231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.852266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.852300 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.852323 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.955179 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.955235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.955253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.955278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.955298 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:46Z","lastTransitionTime":"2026-01-23T14:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:46 crc kubenswrapper[4775]: I0123 14:04:46.997232 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/0.log" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.001419 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e" exitCode=1 Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.001481 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.002633 4775 scope.go:117] "RemoveContainer" containerID="ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.021300 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.052116 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.059632 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.059694 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.059721 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.059752 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.059775 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.074435 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.091019 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.105696 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.125381 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.146609 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.164077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.164136 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.164148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.164173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.164188 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.170327 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.194265 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.211404 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.229409 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.244480 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.258330 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.267672 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.267726 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.267749 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.267777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.267859 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.275570 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce
37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.296821 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.370480 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.370545 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.370564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.370591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.370611 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.452922 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.474028 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.474090 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.474108 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.474131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.474146 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.477006 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
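The recurring NodeNotReady condition above persists until a CNI config file appears in the directory named in the message. A simplified probe of that directory, assuming the common .conf/.conflist/.json extensions (this is not the actual cri-o/ocicni loader):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	found := false
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", e.Name())
			found = true
		}
	}
	if !found {
		// The condition the kubelet keeps reporting as
		// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
		fmt.Println("no CNI configuration file; network plugin not ready")
	}
}
```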
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.499834 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multu
s/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.515000 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.515186 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515226 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:05:03.515194093 +0000 UTC m=+50.510022843 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.515261 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.515301 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.515338 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515412 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515415 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515447 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515460 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:03.51545118 +0000 UTC m=+50.510279930 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515471 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515537 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:03.515516102 +0000 UTC m=+50.510344882 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515646 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515744 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515775 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515699 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.515934 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:03.515900252 +0000 UTC m=+50.510729032 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.516059 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:03.515973394 +0000 UTC m=+50.510802164 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.524556 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Com
pleted\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.543846 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",
\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.563620 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.576981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.577037 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.577053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.577081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.577102 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.583921 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.607437 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.625793 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba
8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.652992 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.668924 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.679349 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 22:43:34.23487583 +0000 UTC Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.679941 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.679980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.679990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.680013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.680025 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.680400 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.693410 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.706095 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.713546 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.713574 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.713618 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.713688 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.713839 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:47 crc kubenswrapper[4775]: E0123 14:04:47.713977 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.717180 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.740137 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da179842
1f233f341251c9fd5a17cf9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:47Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.784126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.784181 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.784195 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.784214 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.784230 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.887138 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.887191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.887211 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.887237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.887259 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.990650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.990718 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.990739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.990770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:47 crc kubenswrapper[4775]: I0123 14:04:47.990792 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:47Z","lastTransitionTime":"2026-01-23T14:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.006032 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/0.log" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.008628 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.092878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.092918 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.092931 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.092948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.092958 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.195876 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.195933 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.195945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.195961 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.195971 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.299311 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.299386 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.299413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.299445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.299469 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.402049 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.402112 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.402130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.402158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.402177 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.505541 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.505607 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.505620 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.505642 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.505655 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.608153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.608201 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.608213 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.608231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.608242 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.687230 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:28:18.571652395 +0000 UTC Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.710347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.710409 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.710425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.710450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.710468 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.812846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.812907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.812924 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.812949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.812967 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.915303 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.915347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.915358 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.915373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:48 crc kubenswrapper[4775]: I0123 14:04:48.915384 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:48Z","lastTransitionTime":"2026-01-23T14:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.011988 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.017215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.017264 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.017279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.017297 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.017312 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.030140 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.086005 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.102603 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.120242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.120288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.120300 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.120320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.120333 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.128428 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":
\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.143284 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.155451 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.170329 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.183390 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.195304 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.212229 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.222454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.222486 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.222499 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.222515 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.222527 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.241650 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.253051 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.273957 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.290115 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.305069 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.325166 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.325223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.325242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.325267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.325284 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.428655 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.428716 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.428732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.428757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.428777 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.532923 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.533018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.533043 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.533077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.533107 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.636688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.636750 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.636767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.636847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.636866 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.687922 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:46:22.576718274 +0000 UTC Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.713576 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.713645 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.713716 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:49 crc kubenswrapper[4775]: E0123 14:04:49.713913 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:49 crc kubenswrapper[4775]: E0123 14:04:49.714014 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:49 crc kubenswrapper[4775]: E0123 14:04:49.714132 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.739143 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.739175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.739183 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.739197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.739206 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.842888 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.842942 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.842963 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.842993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.843015 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.869543 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw"] Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.870151 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.873159 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.873614 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.886542 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.919998 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.937436 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.944391 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.944435 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95ckj\" (UniqueName: \"kubernetes.io/projected/9faab1b3-3f25-40a9-852f-64e14dd51f6b-kube-api-access-95ckj\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.944480 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.944514 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.945676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.945719 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.945727 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.945742 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.945753 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:49Z","lastTransitionTime":"2026-01-23T14:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.950441 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.968060 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.982797 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:49 crc kubenswrapper[4775]: I0123 14:04:49.996977 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:49Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.012562 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.017202 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/1.log" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.017857 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/0.log" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.020861 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5" exitCode=1 Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.020911 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.020985 4775 scope.go:117] "RemoveContainer" containerID="ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.021650 4775 scope.go:117] "RemoveContainer" containerID="e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5" Jan 23 14:04:50 crc kubenswrapper[4775]: E0123 14:04:50.021965 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.030248 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.045971 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.046063 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.046179 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.046268 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.046307 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95ckj\" (UniqueName: \"kubernetes.io/projected/9faab1b3-3f25-40a9-852f-64e14dd51f6b-kube-api-access-95ckj\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.047068 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.047080 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9faab1b3-3f25-40a9-852f-64e14dd51f6b-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.048037 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.048070 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.048083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.048102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.048116 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.062041 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9faab1b3-3f25-40a9-852f-64e14dd51f6b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.062522 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.068271 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95ckj\" (UniqueName: \"kubernetes.io/projected/9faab1b3-3f25-40a9-852f-64e14dd51f6b-kube-api-access-95ckj\") pod \"ovnkube-control-plane-749d76644c-z55mw\" (UID: \"9faab1b3-3f25-40a9-852f-64e14dd51f6b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.078823 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.093538 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.106930 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.124257 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.149028 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.151098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.151340 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.151468 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.151590 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.151895 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.167641 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.183085 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.191561 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.197452 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.213698 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.231159 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.256293 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.256381 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.256406 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.256441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.256468 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.268745 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.286347 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.314090 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3
f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.336933 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.358517 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.359894 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.360019 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.360188 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.360227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.360243 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.371828 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.385552 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.396657 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.411390 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.432452 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.450367 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.463443 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.463491 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.463502 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.463523 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.463535 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.566627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.567212 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.567231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.567258 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.567277 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.639241 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-47lz2"] Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.639756 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 crc kubenswrapper[4775]: E0123 14:04:50.639831 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.656446 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670303 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670286 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670356 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.670388 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.688065 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 22:26:29.757848991 +0000 UTC Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.694642 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30
a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure
-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.714162 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.731257 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3
f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee53a36fd4e619c3304bba625f006b04da1798421f233f341251c9fd5a17cf9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:46Z\\\",\\\"message\\\":\\\" 6102 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0123 14:04:45.926481 6102 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0123 14:04:45.926527 6102 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0123 14:04:45.926550 6102 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0123 14:04:45.926593 6102 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0123 14:04:45.926635 6102 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0123 14:04:45.926609 6102 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0123 14:04:45.926655 6102 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0123 14:04:45.926678 6102 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:04:45.926719 6102 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0123 14:04:45.926733 6102 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0123 14:04:45.926720 6102 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0123 14:04:45.926761 6102 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:04:45.926770 6102 handler.go:208] Removed *v1.Node event handler 2\\\\nI0123 14:04:45.926799 6102 handler.go:208] Removed *v1.Node event handler 7\\\\nI0123 14:04:45.926769 6102 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network 
controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs
\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.745058 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.753077 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.753131 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgjq7\" (UniqueName: \"kubernetes.io/projected/63ed1a97-c97e-40d0-afdf-260c475dc83f-kube-api-access-cgjq7\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 
crc kubenswrapper[4775]: I0123 14:04:50.755719 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.768098 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.772322 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.772362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.772372 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.772390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.772400 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.781335 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.792880 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.804916 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.824143 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.838079 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is 
after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.850472 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.854046 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgjq7\" (UniqueName: 
\"kubernetes.io/projected/63ed1a97-c97e-40d0-afdf-260c475dc83f-kube-api-access-cgjq7\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.854084 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 crc kubenswrapper[4775]: E0123 14:04:50.854185 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:50 crc kubenswrapper[4775]: E0123 14:04:50.854235 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:04:51.354221941 +0000 UTC m=+38.349050681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.869316 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.873829 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgjq7\" (UniqueName: \"kubernetes.io/projected/63ed1a97-c97e-40d0-afdf-260c475dc83f-kube-api-access-cgjq7\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.874553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.874576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.874585 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.874601 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.874611 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.887145 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.903819 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:50Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.994999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.995061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.995078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.995099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:50 crc kubenswrapper[4775]: I0123 14:04:50.995114 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:50Z","lastTransitionTime":"2026-01-23T14:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.033610 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/1.log" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.037840 4775 scope.go:117] "RemoveContainer" containerID="e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5" Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.038023 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.042617 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" event={"ID":"9faab1b3-3f25-40a9-852f-64e14dd51f6b","Type":"ContainerStarted","Data":"f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.042705 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" event={"ID":"9faab1b3-3f25-40a9-852f-64e14dd51f6b","Type":"ContainerStarted","Data":"c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.042741 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" event={"ID":"9faab1b3-3f25-40a9-852f-64e14dd51f6b","Type":"ContainerStarted","Data":"2cec27a59d2b24281c4cc8f2d0fc6df782eae0440f03c9763858dbd21fd19b2f"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.065560 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.080419 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.094529 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.097581 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.097631 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.097644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.097664 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.097677 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.113392 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3
f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.124260 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.138440 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.153328 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egr
ess-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-
6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\
\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.167726 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.184049 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.197081 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.199861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.199913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.199937 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.199967 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.199989 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.213163 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.231314 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.245508 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.263571 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.295714 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.303188 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.303243 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.303255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.303273 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.303286 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.312970 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.331209 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.350627 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.359558 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.359961 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.360127 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" 
failed. No retries permitted until 2026-01-23 14:04:52.36008736 +0000 UTC m=+39.354916200 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.368390 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.383763 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.405975 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.406082 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.406102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.406126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.406143 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.415266 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to 
sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.433544 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.449226 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.465483 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.483269 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.500260 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.508291 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.508333 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.508344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.508360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.508370 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.524172 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.541754 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.559957 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.576331 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 
14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.593905 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.610520 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.610591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.610616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.610649 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.610673 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.616819 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.636337 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.651132 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:51Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.688532 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 11:46:27.559758946 +0000 UTC Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.713062 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.713286 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.713564 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.714034 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714357 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714400 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.714040 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:51 crc kubenswrapper[4775]: E0123 14:04:51.714652 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.818033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.818096 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.818113 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.818140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.818159 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.920899 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.920944 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.920956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.920973 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:51 crc kubenswrapper[4775]: I0123 14:04:51.920983 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:51Z","lastTransitionTime":"2026-01-23T14:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.023591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.023874 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.023964 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.024054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.024185 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.127336 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.127399 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.127417 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.127441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.127457 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.230892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.231215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.231397 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.231536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.231674 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.334953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.335256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.335343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.335441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.335525 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.375261 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:52 crc kubenswrapper[4775]: E0123 14:04:52.375575 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:52 crc kubenswrapper[4775]: E0123 14:04:52.375692 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:04:54.375665149 +0000 UTC m=+41.370493929 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.439041 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.439106 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.439126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.439153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.439168 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.543074 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.543189 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.543207 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.543228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.543241 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.646962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.647468 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.647676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.647867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.648242 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.689450 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:01:18.208473788 +0000 UTC Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.712936 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:52 crc kubenswrapper[4775]: E0123 14:04:52.713303 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.751756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.751878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.751899 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.751925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.751944 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.855004 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.855067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.855083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.855106 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.855120 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.958452 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.958844 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.958956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.959056 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:52 crc kubenswrapper[4775]: I0123 14:04:52.959239 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:52Z","lastTransitionTime":"2026-01-23T14:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.062308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.062662 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.062798 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.063001 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.063145 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.167378 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.167428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.167445 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.167467 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.167483 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.270020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.270388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.270537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.270691 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.270979 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.374058 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.374132 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.374154 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.374184 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.374208 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.476734 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.476778 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.476788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.476821 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.476832 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.579979 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.580026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.580038 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.580059 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.580080 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.683121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.683206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.683223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.683248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.683265 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.689589 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:11:06.251875357 +0000 UTC Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.713096 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.713271 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:53 crc kubenswrapper[4775]: E0123 14:04:53.713500 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.713555 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:53 crc kubenswrapper[4775]: E0123 14:04:53.713727 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:53 crc kubenswrapper[4775]: E0123 14:04:53.714059 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.734786 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.750915 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.767613 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787383 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787442 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787465 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.787565 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.820652 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.832640 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.842214 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.855040 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.866569 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.877753 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.890820 4775 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.890868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.890884 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.890905 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.890920 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.901076 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3
f9f136effb8ac6059d9d74d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.919979 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-
cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.947330 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.962499 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:53Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.993326 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.993384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.993399 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.993420 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:53 crc kubenswrapper[4775]: I0123 14:04:53.993436 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:53Z","lastTransitionTime":"2026-01-23T14:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.003511 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:54Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.027251 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:54Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.038551 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:04:54Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.096379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.096427 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.096440 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.096461 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.096475 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.199852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.199921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.199957 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.199982 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.199998 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.302629 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.302688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.302704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.302728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.302745 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.404157 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:54 crc kubenswrapper[4775]: E0123 14:04:54.404411 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:54 crc kubenswrapper[4775]: E0123 14:04:54.404523 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:04:58.404492866 +0000 UTC m=+45.399321646 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.406181 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.406289 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.406306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.406366 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.406401 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.509724 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.509793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.509862 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.509917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.509940 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.613490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.613549 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.613566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.613590 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.613607 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.693540 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:40:19.496234121 +0000 UTC Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.713169 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:54 crc kubenswrapper[4775]: E0123 14:04:54.713391 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.715894 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.715968 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.715990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.716015 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.716035 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.818702 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.818757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.818773 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.818835 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.818853 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.922329 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.922394 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.922410 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.922433 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:54 crc kubenswrapper[4775]: I0123 14:04:54.922450 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:54Z","lastTransitionTime":"2026-01-23T14:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.025705 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.025761 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.025772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.025790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.025819 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.129147 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.129190 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.129200 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.129217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.129230 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.232226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.232274 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.232285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.232302 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.232315 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.331861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.331907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.331921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.331941 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.331954 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.349908 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:55Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.354458 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.354495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.354505 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.354524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.354537 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.370706 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:55Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.375053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.375090 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.375098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.375113 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.375124 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.393931 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:55Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.398733 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.398770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.398781 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.398811 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.398819 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.414100 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:55Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.418182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.418204 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.418212 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.418223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.418232 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.435636 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:04:55Z is after 2025-08-24T17:21:41Z" Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.435817 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.437260 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.437316 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.437328 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.437349 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.437362 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.540130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.540180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.540193 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.540210 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.540223 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.643270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.643325 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.643334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.643352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.643363 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.693886 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 23:33:01.854468558 +0000 UTC Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.713299 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.713378 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.713389 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.713508 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.713695 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:04:55 crc kubenswrapper[4775]: E0123 14:04:55.713871 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.746682 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.746743 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.746758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.746782 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.746834 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.850334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.850402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.850421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.850453 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.850477 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.954185 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.954253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.954272 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.954298 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:55 crc kubenswrapper[4775]: I0123 14:04:55.954317 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:55Z","lastTransitionTime":"2026-01-23T14:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.057584 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.057649 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.057668 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.057704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.057728 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.160957 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.160998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.161007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.161021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.161032 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.264539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.264624 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.264639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.264667 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.264684 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.369676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.369742 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.369759 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.369795 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.369854 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.478837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.479139 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.479156 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.479177 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.479188 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.582084 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.582155 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.582173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.582198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.582217 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.685951 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.686042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.686061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.686085 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.686102 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.694564 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 04:36:42.266968699 +0000 UTC Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.714048 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:04:56 crc kubenswrapper[4775]: E0123 14:04:56.714260 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.789101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.789160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.789178 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.789201 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.789218 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.891853 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.891930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.891965 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.891991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.892003 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.995457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.995558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.995579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.995606 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:56 crc kubenswrapper[4775]: I0123 14:04:56.995623 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:56Z","lastTransitionTime":"2026-01-23T14:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.098334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.098401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.098426 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.098455 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.098472 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.200881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.200963 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.200980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.201003 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.201018 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.304401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.304746 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.304765 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.304791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.304843 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.407943 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.408001 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.408017 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.408040 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.408093 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.511030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.511096 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.511120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.511151 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.511173 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.614770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.614897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.614921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.614951 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.614976 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.695776 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 03:26:52.383448779 +0000 UTC
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.713439 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:57 crc kubenswrapper[4775]: E0123 14:04:57.713617 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.713945 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:04:57 crc kubenswrapper[4775]: E0123 14:04:57.714052 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.714160 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:04:57 crc kubenswrapper[4775]: E0123 14:04:57.714435 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.719637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.719702 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.719725 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.719754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.719776 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.823509 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.823576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.823594 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.823618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.823642 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.927054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.927124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.927142 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.927167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:57 crc kubenswrapper[4775]: I0123 14:04:57.927190 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:57Z","lastTransitionTime":"2026-01-23T14:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.031044 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.031119 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.031143 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.031177 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.031201 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.134285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.134354 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.134378 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.134410 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.134432 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.237225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.237274 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.237286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.237304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.237313 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.340156 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.340206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.340223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.340251 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.340268 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.443638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.443732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.443755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.443786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.443858 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.456216 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:04:58 crc kubenswrapper[4775]: E0123 14:04:58.456433 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 14:04:58 crc kubenswrapper[4775]: E0123 14:04:58.456505 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:05:06.456486429 +0000 UTC m=+53.451315169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.546535 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.546669 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.546692 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.546715 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.546731 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.650156 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.650214 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.650232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.650255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.650273 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.696227 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:39:57.603004679 +0000 UTC
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.713793 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:04:58 crc kubenswrapper[4775]: E0123 14:04:58.714037 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.753362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.753436 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.753453 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.753478 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.753495 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.856525 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.856581 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.856593 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.856621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.856634 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.959539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.959594 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.959604 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.959621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:58 crc kubenswrapper[4775]: I0123 14:04:58.959633 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:58Z","lastTransitionTime":"2026-01-23T14:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.062439 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.062490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.062501 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.062517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.062527 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.166514 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.166578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.166602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.166644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.166667 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.269917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.269966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.269977 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.269993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.270007 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.372853 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.372902 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.372913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.372929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.372941 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.476878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.476921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.476930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.476945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.476955 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.580008 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.580063 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.580079 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.580098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.580128 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.682473 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.682558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.682578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.682602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.682620 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.696845 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 15:14:30.167411392 +0000 UTC
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.713153 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.713218 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:04:59 crc kubenswrapper[4775]: E0123 14:04:59.713324 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.713350 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:04:59 crc kubenswrapper[4775]: E0123 14:04:59.713460 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:04:59 crc kubenswrapper[4775]: E0123 14:04:59.713669 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.785446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.785488 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.785497 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.785515 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.785525 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.888077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.888120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.888131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.888149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.888160 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.991313 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.991373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.991389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.991413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:04:59 crc kubenswrapper[4775]: I0123 14:04:59.991431 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:04:59Z","lastTransitionTime":"2026-01-23T14:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.093472 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.093523 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.093534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.093553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.093566 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.195727 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.195766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.195777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.195792 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.195837 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.298678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.298719 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.298731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.298746 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.298758 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.401089 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.401162 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.401183 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.401209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.401231 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.503579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.503629 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.503642 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.503658 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.503669 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.608663 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.608709 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.608717 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.608732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.608741 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.697300 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:58:10.276233665 +0000 UTC
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712433 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712515 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712555 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.712951 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:00 crc kubenswrapper[4775]: E0123 14:05:00.713124 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.815543 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.815619 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.815639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.815666 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.815684 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.918634 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.918684 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.918696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.918714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:00 crc kubenswrapper[4775]: I0123 14:05:00.918728 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:00Z","lastTransitionTime":"2026-01-23T14:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.022576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.022657 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.022680 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.022707 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.022726 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.125130 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.125168 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.125175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.125189 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.125199 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.227687 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.227732 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.227744 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.227761 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.227794 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.330480 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.330555 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.330566 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.330581 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.330591 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.433522 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.433568 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.433578 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.433595 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.433607 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.537014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.537104 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.537123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.537164 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.537187 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.640362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.640402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.640415 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.640441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.640456 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.698099 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 09:03:23.403345058 +0000 UTC
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.713527 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.713595 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:01 crc kubenswrapper[4775]: E0123 14:05:01.713715 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.713760 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:01 crc kubenswrapper[4775]: E0123 14:05:01.713993 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:01 crc kubenswrapper[4775]: E0123 14:05:01.714188 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.744190 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.744281 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.744305 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.744334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.744355 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.847556 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.847650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.847676 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.847706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.847726 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.950994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.951046 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.951067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.951091 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:01 crc kubenswrapper[4775]: I0123 14:05:01.951111 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:01Z","lastTransitionTime":"2026-01-23T14:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.054371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.054415 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.054432 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.054454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.054467 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.156848 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.156884 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.156935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.156961 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.156977 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.259516 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.259591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.259610 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.259637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.259655 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.362724 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.362787 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.362885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.362919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.362940 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.466440 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.466553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.466630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.466681 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.466707 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.569943 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.570003 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.570021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.570044 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.570060 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.672511 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.672579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.672603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.672636 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.672663 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.698326 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 09:19:53.78024853 +0000 UTC Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.713928 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:02 crc kubenswrapper[4775]: E0123 14:05:02.714075 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.714930 4775 scope.go:117] "RemoveContainer" containerID="e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.775327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.775405 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.775422 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.775448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.775465 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.878946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.879012 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.879028 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.879053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.879098 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.982175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.982217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.982229 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.982248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:02 crc kubenswrapper[4775]: I0123 14:05:02.982261 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:02Z","lastTransitionTime":"2026-01-23T14:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.084195 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.084234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.084247 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.084264 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.084275 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.086848 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/1.log" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.089392 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.089745 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.102786 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.116011 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.132830 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.143876 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.156572 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.174853 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.186575 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.186617 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.186633 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.186653 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.186667 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.189725 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.203496 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.215073 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 
14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.230407 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.258259 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"
cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.271231 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.288153 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.288917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.288969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.288981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.288999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.289012 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.310527 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.330970 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.347789 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.369352 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to 
sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.390768 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.390828 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.390837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.390854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.390865 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.492788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.492843 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.492855 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.492872 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.492883 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.595133 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.595167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.595175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.595189 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.595199 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.614619 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.614697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.614731 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.614751 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.614772 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.614849 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.614930 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:05:35.614891383 +0000 UTC m=+82.609720153 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.614978 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.614986 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:35.614971825 +0000 UTC m=+82.609800605 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.614999 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615014 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615037 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615081 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615049 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615103 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615055 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:35.615038587 +0000 UTC m=+82.609867427 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615151 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:35.615127389 +0000 UTC m=+82.609956249 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.615171 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:05:35.61515938 +0000 UTC m=+82.609988280 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698550 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:44:10.699649447 +0000 UTC Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698720 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698750 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.698771 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.713651 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.713886 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.714473 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.714623 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.714914 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:03 crc kubenswrapper[4775]: E0123 14:05:03.715057 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.739784 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.761525 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.777161 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.797605 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.801860 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.801902 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.801917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.801935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.801947 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.840626 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.853514 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.866402 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.885014 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.900432 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.904929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.905017 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.905045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.905072 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.905088 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:03Z","lastTransitionTime":"2026-01-23T14:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.914907 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.934690 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to 
sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.950205 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.965792 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.975994 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:03 crc kubenswrapper[4775]: I0123 14:05:03.991098 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:03Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.006981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.007018 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.007027 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.007043 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.007053 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.007885 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSta
tuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:
04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.019199 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.095548 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/2.log" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.096591 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/1.log" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.100089 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" exitCode=1 Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.100127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.100177 4775 scope.go:117] "RemoveContainer" containerID="e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.102891 4775 scope.go:117] "RemoveContainer" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" Jan 23 14:05:04 crc kubenswrapper[4775]: E0123 14:05:04.103325 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with 
CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.110484 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.110543 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.110565 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.110597 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.110617 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.128901 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcon
t/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.150439 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.167572 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.199203 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e859953b87c3a3d0413118cd0c2f199cb6576dc3f9f136effb8ac6059d9d74d5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"message\\\":\\\"nsole-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:04:49.017949 6230 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-operator-lifecycle-manager/package-server-manager-metrics\\\\\\\"}\\\\nI0123 14:04:49.017910 6230 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/machine-api-controllers]} name:Service_openshift-machine-api/machine-api-controllers_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.167:8441: 10.217.4.167:8442: 10.217.4.167:8444:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {62af83f3-e0c8-4632-aaaa-17488566a9d8}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:04:49.017979 6230 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to 
sh\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.212017 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.213469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.213537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.213603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.213673 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.213774 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.229078 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.247605 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.257565 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.268323 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.281512 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.293799 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.305851 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.317866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.317910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.317922 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.317938 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.317952 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.320328 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.334081 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.352504 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.364418 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.374412 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:04Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.420874 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.420911 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.420922 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc 
kubenswrapper[4775]: I0123 14:05:04.420938 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.420949 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.523835 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.523903 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.523921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.523952 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.523975 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.627850 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.627925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.627949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.627976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.627995 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.699097 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:53:46.858417138 +0000 UTC Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.713703 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:04 crc kubenswrapper[4775]: E0123 14:05:04.714022 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.730999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.731056 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.731072 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.731093 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.731108 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.833818 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.833858 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.833870 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.833885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.833897 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.937014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.937098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.937123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.937171 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:04 crc kubenswrapper[4775]: I0123 14:05:04.937196 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:04Z","lastTransitionTime":"2026-01-23T14:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.040469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.040527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.040547 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.040573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.040591 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.105281 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/2.log" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.110045 4775 scope.go:117] "RemoveContainer" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.110278 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.130189 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.143772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.143829 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.143840 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.143857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc 
kubenswrapper[4775]: I0123 14:05:05.143869 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.149694 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.163597 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.186131 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf6
4935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.201738 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17
b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.218170 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.229681 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.243040 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.247266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.247320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.247337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.247358 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.247373 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.263216 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.279668 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.298924 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.316553 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.333029 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350009 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350334 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350364 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350397 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.350424 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.380462 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.397457 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.412505 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.452952 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.452998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.453011 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.453028 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.453039 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.555160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.555213 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.555248 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.555278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.555297 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.658226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.658536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.658706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.658900 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.659065 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.700229 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:40:30.769569416 +0000 UTC Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.705129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.705243 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.705270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.705308 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.705332 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.714023 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.714030 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.714204 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.714348 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.714042 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.714563 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.729150 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.734391 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.734475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.734504 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.734536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.734556 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.753899 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.758930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.758987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.759020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.759046 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.759065 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.779514 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.784447 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.784496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.784513 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.784538 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.784557 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.804743 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.810186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.810236 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.810253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.810280 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.810298 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.830701 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:05Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:05 crc kubenswrapper[4775]: E0123 14:05:05.830968 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.834120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.834202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.834222 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.834253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.834275 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.937862 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.937930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.937943 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.937968 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:05 crc kubenswrapper[4775]: I0123 14:05:05.937983 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:05Z","lastTransitionTime":"2026-01-23T14:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.040206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.040253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.040264 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.040279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.040291 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.142876 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.142929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.142948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.142969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.142983 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.246742 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.246837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.246857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.246881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.246900 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.350073 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.350163 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.350180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.350206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.350223 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.452926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.453012 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.453031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.453057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.453075 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.544422 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:06 crc kubenswrapper[4775]: E0123 14:05:06.544603 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:05:06 crc kubenswrapper[4775]: E0123 14:05:06.544661 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:05:22.544642716 +0000 UTC m=+69.539471466 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.555527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.555650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.555671 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.555698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.555716 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.658788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.659210 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.659416 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.659619 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.659906 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.701795 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:31:27.970720928 +0000 UTC Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.713176 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:06 crc kubenswrapper[4775]: E0123 14:05:06.713576 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.762981 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.763060 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.763083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.763114 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.763136 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.866259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.866297 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.866309 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.866323 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.866332 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.969084 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.969164 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.969186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.969214 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:06 crc kubenswrapper[4775]: I0123 14:05:06.969235 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:06Z","lastTransitionTime":"2026-01-23T14:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.072136 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.072222 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.072241 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.072273 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.072296 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.176519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.176608 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.176621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.176646 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.176661 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.300703 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.300756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.300770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.300794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.300832 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.403853 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.403977 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.404009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.404099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.404122 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.507636 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.508057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.508069 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.508086 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.508098 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.612649 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.613120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.613385 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.613548 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.613719 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.703010 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:50:00.775772596 +0000 UTC Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.713451 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.713493 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.713539 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:07 crc kubenswrapper[4775]: E0123 14:05:07.713632 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:07 crc kubenswrapper[4775]: E0123 14:05:07.713896 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:07 crc kubenswrapper[4775]: E0123 14:05:07.714043 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.716560 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.716622 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.716640 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.716665 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.716682 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.819667 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.819729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.819745 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.819770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.819787 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.923219 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.923291 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.923315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.923345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:07 crc kubenswrapper[4775]: I0123 14:05:07.923366 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:07Z","lastTransitionTime":"2026-01-23T14:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.026100 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.026151 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.026167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.026185 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.026201 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.127793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.127856 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.127865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.127881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.127890 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.230705 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.230755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.230766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.230782 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.230794 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.333604 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.333644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.333657 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.333675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.333688 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.436917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.436990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.437013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.437042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.437065 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.540255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.540315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.540339 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.540368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.540390 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.643928 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.644011 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.644033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.644061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.644082 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.703674 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:28:20.857054239 +0000 UTC Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.713392 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:08 crc kubenswrapper[4775]: E0123 14:05:08.713571 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.746687 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.746738 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.747971 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.748180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.748201 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.851879 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.851920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.851929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.851946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.851956 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.954241 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.954312 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.954332 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.954356 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:08 crc kubenswrapper[4775]: I0123 14:05:08.954374 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:08Z","lastTransitionTime":"2026-01-23T14:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.056739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.056772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.056781 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.056816 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.056827 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.159199 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.159242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.159252 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.159267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.159276 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.261592 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.261632 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.261641 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.261656 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.261667 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.363960 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.363998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.364009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.364026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.364039 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.430487 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.445131 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.452466 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.466672 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.466741 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.466764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.466882 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.466909 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.470310 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.490106 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.510370 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.542438 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.556767 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.569858 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.569912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.569929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.569953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.569970 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.575180 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"h
ostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.597717 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha2
56:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.618482 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.635266 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.661197 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.673042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.673123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.673148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.673180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.673197 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.680351 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.703668 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cn
ibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.704108 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:17:06.930551676 +0000 UTC Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.713169 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.713169 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:09 crc kubenswrapper[4775]: E0123 14:05:09.713343 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:09 crc kubenswrapper[4775]: E0123 14:05:09.713416 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.713413 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:09 crc kubenswrapper[4775]: E0123 14:05:09.713497 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.721022 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.741778 4775 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.764086 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.775891 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.775935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.775946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.775980 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.775994 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.779933 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:09Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.878760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.878867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.878887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.878913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.878931 4775 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.981718 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.981757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.981770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.981822 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:09 crc kubenswrapper[4775]: I0123 14:05:09.981833 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:09Z","lastTransitionTime":"2026-01-23T14:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.084259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.084306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.084319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.084337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.084349 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.186919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.186978 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.186998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.187025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.187043 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.290288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.290329 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.290338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.290352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.290361 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.392410 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.392448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.392457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.392473 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.392484 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.495209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.495276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.495294 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.495320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.495340 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.598234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.598299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.598319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.598367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.598400 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.701384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.701456 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.701472 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.701500 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.701518 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.704662 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 16:40:00.240193891 +0000 UTC Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.713115 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:10 crc kubenswrapper[4775]: E0123 14:05:10.713353 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.804616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.804646 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.804655 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.804668 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.804678 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.908675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.908747 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.908766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.908832 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:10 crc kubenswrapper[4775]: I0123 14:05:10.908861 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:10Z","lastTransitionTime":"2026-01-23T14:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.011438 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.011474 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.011483 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.011498 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.011508 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.114094 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.114134 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.114147 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.114163 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.114172 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.218007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.218079 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.218096 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.218124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.218143 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.321739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.321819 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.321832 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.321854 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.321871 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.425026 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.425078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.425091 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.425115 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.425132 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.527790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.527900 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.527995 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.528019 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.528036 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.631221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.631369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.631396 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.631426 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.631452 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.705338 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 14:39:59.838675144 +0000 UTC Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.713923 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.714016 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.714021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:11 crc kubenswrapper[4775]: E0123 14:05:11.714142 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:11 crc kubenswrapper[4775]: E0123 14:05:11.714286 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:11 crc kubenswrapper[4775]: E0123 14:05:11.714439 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.734562 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.734616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.734631 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.734651 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.734666 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.838707 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.838781 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.838831 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.838889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.838929 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.942273 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.942324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.942344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.942367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:11 crc kubenswrapper[4775]: I0123 14:05:11.942385 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:11Z","lastTransitionTime":"2026-01-23T14:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.044979 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.045059 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.045082 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.045112 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.045134 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.147086 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.147124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.147135 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.147150 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.147160 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.249843 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.249887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.249897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.249912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.249922 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.353465 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.353519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.353536 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.353556 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.353571 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.456319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.456355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.456368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.456384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.456396 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.559198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.559245 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.559259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.559277 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.559288 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.661930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.661962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.661970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.661987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.661998 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.705472 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:08:06.077152554 +0000 UTC Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.713423 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:12 crc kubenswrapper[4775]: E0123 14:05:12.713648 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.764371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.764420 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.764436 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.764457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.764471 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.867424 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.867496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.867518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.867550 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.867572 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.970990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.971047 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.971073 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.971099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:12 crc kubenswrapper[4775]: I0123 14:05:12.971116 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:12Z","lastTransitionTime":"2026-01-23T14:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.073993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.074022 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.074031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.074068 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.074081 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.177052 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.177104 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.177115 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.177129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.177138 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.279563 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.279624 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.279637 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.279656 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.279668 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.382377 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.382425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.382438 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.382456 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.382469 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.486240 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.486294 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.486320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.486353 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.486374 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.589139 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.589228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.589249 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.589283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.589302 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.692858 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.692958 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.692979 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.693013 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.693035 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.706212 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:21:22.425653057 +0000 UTC Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.713956 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.714077 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:13 crc kubenswrapper[4775]: E0123 14:05:13.714159 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.714088 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:13 crc kubenswrapper[4775]: E0123 14:05:13.714325 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:13 crc kubenswrapper[4775]: E0123 14:05:13.714461 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.733511 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.750445 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.763143 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.780278 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.799315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.799743 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.799996 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.800190 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.800337 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.800021 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.814704 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.830898 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.845304 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.860494 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.876733 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.890712 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.902343 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.905025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.905089 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.905111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.905140 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.905160 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:13Z","lastTransitionTime":"2026-01-23T14:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.919155 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.943482 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.957873 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.976629 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf6
4935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:13 crc kubenswrapper[4775]: I0123 14:05:13.990517 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:13Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.004830 4775 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:14Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007789 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007810 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007826 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007835 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007789 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007810 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007826 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.007835 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:14Z","lastTransitionTime":"2026-01-23T14:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
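[annotation] The NodeNotReady loop above is independent of the webhook failure: the kubelet keeps the node's Ready condition False because the container runtime reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. On this OVN-Kubernetes cluster that file is written by the ovnkube-controller container, which the log shows crash-looping ("back-off 20s restarting" at restartCount 2, consistent with kubelet's crash-loop backoff, which starts around 10s and roughly doubles per restart up to a 5m cap). A rough Go approximation of the readiness check follows; the real check lives in the kubelet's CNI plugin manager via libcni, which also parses and validates the files rather than just globbing for them:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Approximates the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": the network stays unready until at least
// one CNI config (*.conf, *.conflist or *.json) appears in the directory.
func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory taken from the log message
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Println("NetworkReady=false: no CNI configuration file found")
		return
	}
	fmt.Println("CNI configs:", found)
}

Once ovnkube-controller stays up long enough to write its config, this directory populates and the NodeNotReady records below should stop. [end annotation]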
[annotation: six node-status event blocks identical to the one above, each repeating the NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady events and the setters.go:603 "Node became not ready" condition, were logged at 14:05:14.111, .213, .316, .418, .522 and .625 (~100 ms apart); duplicates elided]
Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.707140 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 04:05:05.658986093 +0000 UTC Jan 23 14:05:14 crc kubenswrapper[4775]: I0123 14:05:14.713567 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:14 crc kubenswrapper[4775]: E0123 14:05:14.713784 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
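[annotation] The certificate_manager.go:356 lines are unrelated to the expired webhook certificate: they track the kubelet's own serving certificate, which is still valid until 2026-02-24. Consecutive lines report the same expiration but different rotation deadlines (2025-12-12 here, 2025-12-20 one second later) because client-go's certificate manager recomputes the deadline with fresh jitter on every pass, picking a uniformly random point in roughly the 70% to 90% span of the certificate's validity window. Both deadlines are already in the past relative to the node clock, so rotation is due immediately. A sketch of that computation, with the one-year lifetime assumed for the example (only notAfter comes from the log):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// Approximates client-go's jittered rotation deadline: a uniformly random
// point in the 70%-90% span of the certificate's validity. Each sync
// recomputes it, which is why the two log lines show the same expiration
// but different deadlines.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// notAfter is taken from the log; error ignored for brevity in this sketch.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(0, -12, 0) // assumed one-year lifetime
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}

With a one-year lifetime the 70%-90% window runs from early November 2025 to mid January 2026, which brackets both deadlines seen in the log. [end annotation]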
[annotation: nine more node-status event blocks identical to those above were logged between 14:05:14.728 and 14:05:15.553 (~100 ms apart); duplicates elided]
Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.657170 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.657221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.657233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.657707 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.657737 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.707608 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:02:49.414223525 +0000 UTC Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.712946 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.712980 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:15 crc kubenswrapper[4775]: E0123 14:05:15.713133 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.713153 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:15 crc kubenswrapper[4775]: E0123 14:05:15.713297 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:15 crc kubenswrapper[4775]: E0123 14:05:15.713434 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.760129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.760172 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.760184 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.760198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.760210 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.862519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.862564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.862576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.862592 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.862603 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.965717 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.965764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.965788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.965864 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:15 crc kubenswrapper[4775]: I0123 14:05:15.965891 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:15Z","lastTransitionTime":"2026-01-23T14:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.068262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.068319 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.068328 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.068341 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.068350 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.124476 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.124531 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.124544 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.124564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.124576 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.142075 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:16Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.147260 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.147330 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.147355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.147384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.147406 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.163022 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:16Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.166947 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.167003 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.167021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.167042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.167055 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.181285 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:16Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.184796 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.184880 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.184897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.184921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.184938 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.201422 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:16Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.205969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.206033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.206052 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.206080 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.206099 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.224738 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:16Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.224954 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.226921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.226956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.226967 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.226984 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.226996 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.330220 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.330254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.330266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.330282 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.330294 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.434760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.434878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.434896 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.434915 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.434935 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.538915 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.538960 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.538976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.538999 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.539015 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.642645 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.642758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.642779 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.642846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.642866 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.708527 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 23:34:43.591372412 +0000 UTC Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.713900 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.714360 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.714522 4775 scope.go:117] "RemoveContainer" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" Jan 23 14:05:16 crc kubenswrapper[4775]: E0123 14:05:16.714677 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.747524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.747675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.747695 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.747756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.747776 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.850898 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.851029 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.851311 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.851338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.851657 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.954180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.954215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.954222 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.954237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:16 crc kubenswrapper[4775]: I0123 14:05:16.954247 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:16Z","lastTransitionTime":"2026-01-23T14:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.056850 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.056915 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.056934 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.056956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.056974 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.160398 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.160469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.160487 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.160510 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.160527 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.263644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.263701 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.263716 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.263745 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.263763 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.366760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.366796 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.366837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.366856 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.366864 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.469729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.469838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.469863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.469893 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.469915 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.573025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.573066 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.573080 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.573097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.573109 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.675551 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.675589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.675600 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.675616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.675629 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.709604 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 21:10:10.784158573 +0000 UTC Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.712955 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.713010 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:17 crc kubenswrapper[4775]: E0123 14:05:17.713075 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.712956 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:17 crc kubenswrapper[4775]: E0123 14:05:17.713204 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:17 crc kubenswrapper[4775]: E0123 14:05:17.713285 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.778169 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.778213 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.778224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.778242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.778252 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.880990 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.881057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.881075 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.881100 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.881118 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.983226 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.983265 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.983276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.983294 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:17 crc kubenswrapper[4775]: I0123 14:05:17.983306 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:17Z","lastTransitionTime":"2026-01-23T14:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.085833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.085881 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.085897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.085921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.085937 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.188376 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.188448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.188496 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.188521 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.188537 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.292198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.292266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.292283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.292305 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.292325 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.395233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.395275 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.395285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.395302 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.395316 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.497569 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.497603 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.497614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.497630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.497640 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.601353 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.601408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.601424 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.601450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.601468 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.705180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.705234 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.705246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.705265 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.705278 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.710476 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:36:53.814828794 +0000 UTC Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.713718 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:18 crc kubenswrapper[4775]: E0123 14:05:18.713889 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.808694 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.808723 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.808736 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.808753 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.808771 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.911627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.911675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.911687 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.911704 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:18 crc kubenswrapper[4775]: I0123 14:05:18.911715 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:18Z","lastTransitionTime":"2026-01-23T14:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.015270 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.015327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.015343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.015368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.015384 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:19Z","lastTransitionTime":"2026-01-23T14:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.118102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.118194 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.118216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.118246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.118269 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:19Z","lastTransitionTime":"2026-01-23T14:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.222564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.222605 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.222620 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.222643 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.222659 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:19Z","lastTransitionTime":"2026-01-23T14:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.325874 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.325964 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.325987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.326023 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.326059 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:19Z","lastTransitionTime":"2026-01-23T14:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.428986 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.429087 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.429173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.429253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.429276 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:19Z","lastTransitionTime":"2026-01-23T14:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.532246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.532280 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.532290 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.532306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.532356 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:19Z","lastTransitionTime":"2026-01-23T14:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.635567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.635647 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.635668 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.635692 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.635710 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:19Z","lastTransitionTime":"2026-01-23T14:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.711275 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 08:21:14.232977516 +0000 UTC Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.713497 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:19 crc kubenswrapper[4775]: E0123 14:05:19.713700 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.714150 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:19 crc kubenswrapper[4775]: E0123 14:05:19.714261 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.714494 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:19 crc kubenswrapper[4775]: E0123 14:05:19.714598 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.738284 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.738343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.738359 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.738379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.738393 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:19Z","lastTransitionTime":"2026-01-23T14:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.841527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.841580 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.841600 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.841625 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.841642 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:19Z","lastTransitionTime":"2026-01-23T14:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.944865 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.944950 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.944973 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.945005 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:19 crc kubenswrapper[4775]: I0123 14:05:19.945026 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:19Z","lastTransitionTime":"2026-01-23T14:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.048205 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.048239 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.048249 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.048265 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.048275 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:20Z","lastTransitionTime":"2026-01-23T14:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.150180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.150215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.150224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.150237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.150246 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:20Z","lastTransitionTime":"2026-01-23T14:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.252910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.252948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.252959 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.252976 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.252987 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:20Z","lastTransitionTime":"2026-01-23T14:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.354950 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.355009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.355032 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.355061 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.355080 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:20Z","lastTransitionTime":"2026-01-23T14:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.457568 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.457628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.457645 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.457667 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.457684 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:20Z","lastTransitionTime":"2026-01-23T14:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.560553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.560609 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.560628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.560653 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.560678 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:20Z","lastTransitionTime":"2026-01-23T14:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.663157 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.663191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.663210 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.663227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.663237 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:20Z","lastTransitionTime":"2026-01-23T14:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.711945 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 14:08:29.127765757 +0000 UTC Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.713299 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:20 crc kubenswrapper[4775]: E0123 14:05:20.713486 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.765467 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.765512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.765523 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.765541 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.765553 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:20Z","lastTransitionTime":"2026-01-23T14:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.869500 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.869597 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.869617 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.869639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.869654 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:20Z","lastTransitionTime":"2026-01-23T14:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.971794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.971838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.971848 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.971862 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:20 crc kubenswrapper[4775]: I0123 14:05:20.971874 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:20Z","lastTransitionTime":"2026-01-23T14:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.074486 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.074538 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.074550 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.074567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.074578 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:21Z","lastTransitionTime":"2026-01-23T14:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.176850 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.176889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.176901 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.176916 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.176927 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:21Z","lastTransitionTime":"2026-01-23T14:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.279347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.279396 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.279410 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.279427 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.279437 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:21Z","lastTransitionTime":"2026-01-23T14:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.381873 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.381922 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.381935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.381951 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.381962 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:21Z","lastTransitionTime":"2026-01-23T14:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.484929 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.484989 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.485010 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.485042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.485065 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:21Z","lastTransitionTime":"2026-01-23T14:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.588031 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.588068 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.588077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.588091 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.588100 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:21Z","lastTransitionTime":"2026-01-23T14:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.690811 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.690847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.690858 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.690873 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.690886 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:21Z","lastTransitionTime":"2026-01-23T14:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.712345 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 18:59:03.957042044 +0000 UTC Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.713552 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.713636 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:21 crc kubenswrapper[4775]: E0123 14:05:21.713679 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.713706 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:21 crc kubenswrapper[4775]: E0123 14:05:21.713776 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:21 crc kubenswrapper[4775]: E0123 14:05:21.713839 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.794794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.794914 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.794938 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.794970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.794995 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:21Z","lastTransitionTime":"2026-01-23T14:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.898745 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.898791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.898827 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.898843 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:21 crc kubenswrapper[4775]: I0123 14:05:21.898852 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:21Z","lastTransitionTime":"2026-01-23T14:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.001635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.001678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.001714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.001731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.001741 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:22Z","lastTransitionTime":"2026-01-23T14:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.104573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.104638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.104662 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.104687 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.104704 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:22Z","lastTransitionTime":"2026-01-23T14:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.207098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.207148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.207160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.207177 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.207191 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:22Z","lastTransitionTime":"2026-01-23T14:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.309573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.309616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.309628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.309647 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.309659 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:22Z","lastTransitionTime":"2026-01-23T14:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.412175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.412207 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.412217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.412230 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.412238 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:22Z","lastTransitionTime":"2026-01-23T14:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.514032 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.514055 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.514063 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.514077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.514086 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:22Z","lastTransitionTime":"2026-01-23T14:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.616278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.616311 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.616320 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.616335 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.616345 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:22Z","lastTransitionTime":"2026-01-23T14:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.617992 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:22 crc kubenswrapper[4775]: E0123 14:05:22.618183 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:05:22 crc kubenswrapper[4775]: E0123 14:05:22.618297 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:05:54.618275037 +0000 UTC m=+101.613103777 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.712894 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:59:09.090882197 +0000 UTC Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.712994 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:22 crc kubenswrapper[4775]: E0123 14:05:22.713118 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.719757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.719815 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.719826 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.719841 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.719854 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:22Z","lastTransitionTime":"2026-01-23T14:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.821870 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.821901 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.821912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.821927 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.821937 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:22Z","lastTransitionTime":"2026-01-23T14:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.935057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.935114 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.935126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.935145 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:22 crc kubenswrapper[4775]: I0123 14:05:22.935158 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:22Z","lastTransitionTime":"2026-01-23T14:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.037512 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.037541 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.037550 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.037564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.037573 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.139921 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.139966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.139977 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.139994 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.140006 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.182733 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/0.log" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.182776 4775 generic.go:334] "Generic (PLEG): container finished" podID="ba4447c0-bada-49eb-b6b4-b25feff627a9" containerID="d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec" exitCode=1 Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.182817 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerDied","Data":"d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.183185 4775 scope.go:117] "RemoveContainer" containerID="d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.197058 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.207780 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
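Every status patch in this stretch is rejected for the same reason: the kubelet's PATCH to the API server is intercepted by the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, whose serving certificate expired 2025-08-24T17:21:41Z while the node clock reads 2026-01-23. A minimal sketch of how one could confirm the certificate's validity window from the node; the address comes from the log, but the program itself (and its deliberate InsecureSkipVerify, since verification is exactly what is failing) is an illustrative diagnostic, not OpenShift tooling:

    package main

    import (
        "crypto/tls"
        "fmt"
        "log"
        "time"
    )

    func main() {
        // Fetch the webhook's serving certificate without verifying it;
        // we only want to inspect its NotBefore/NotAfter dates.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743",
            &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("NotBefore:", cert.NotBefore)
        fmt.Println("NotAfter: ", cert.NotAfter)
        fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
    }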
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.221636 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.240653 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.242029 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.242097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.242106 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.242146 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.242158 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.256051 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.268585 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
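This explains the kube-multus exit recorded at 14:05:23: multus copies its CNI binaries, starts the daemon at 14:04:38, then polls for the readiness-indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovnkube writes once the default network is up; because ovnkube-controller is crash-looping (the CrashLoopBackOff above), the file never appears and the poll times out after roughly 45 seconds. A rough stand-in for that wait loop using only the standard library (multus itself uses apimachinery's poll helpers; the one-second interval is an assumption):

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // waitForFile polls until path exists or the timeout elapses,
    // roughly what the "Readiness Indicator file check" in the log does.
    func waitForFile(path string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for the condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := waitForFile("/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
            time.Second, 45*time.Second)
        fmt.Println("result:", err)
    }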
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.277254 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.286276 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.309192 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.334347 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.344751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.344782 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.344793 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.344826 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.344838 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.351781 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.366489 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.378699 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.387403 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.397407 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.415317 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.430207 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.442023 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.447492 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.447541 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.447553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc 
kubenswrapper[4775]: I0123 14:05:23.447569 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.447580 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.550054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.550097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.550108 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.550125 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.550137 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.652946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.652987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.653006 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.653024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.653035 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.713606 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 17:28:40.045438346 +0000 UTC Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.713746 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.713883 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:23 crc kubenswrapper[4775]: E0123 14:05:23.714104 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.714123 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:23 crc kubenswrapper[4775]: E0123 14:05:23.714233 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:23 crc kubenswrapper[4775]: E0123 14:05:23.714328 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.733621 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.752647 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.755390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.755428 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.755441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.755457 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.755470 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.767273 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.791414 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.805534 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.830630 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.849419 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.857628 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.857662 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:23 crc 
kubenswrapper[4775]: I0123 14:05:23.857672 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.857688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.857698 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.860573 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.874420 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.892031 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.907291 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.926007 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.940524 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.956442 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z" Jan 23 
14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.960139 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.960208 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.960227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.960254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.960273 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:23Z","lastTransitionTime":"2026-01-23T14:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:23 crc kubenswrapper[4775]: I0123 14:05:23.977563 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:23Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.009020 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.028509 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.039194 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.063255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.063338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.063360 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.063389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.063414 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.166643 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.166689 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.166701 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.166725 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.166737 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.189041 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/0.log"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.189127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerStarted","Data":"8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058"}
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.206986 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.221631 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.241697 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.255474 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.268077 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.270313 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.270347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.270356 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.270371 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.270383 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.279148 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.288595 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.302267 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.320415 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.332056 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.342281 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.356139 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.365485 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.373064 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.373095 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.373107 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.373124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.373138 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.379639 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.405067 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.418209 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.430582 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.444343 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:24Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.475007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.475057 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.475069 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.475087 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.475100 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.578956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.579050 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.579077 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.579110 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.579149 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.682727 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.682773 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.682783 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.682815 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.682827 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.713738 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 10:36:51.815175239 +0000 UTC
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.713821 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2"
Jan 23 14:05:24 crc kubenswrapper[4775]: E0123 14:05:24.713990 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.786007 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.786067 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.786081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.786099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.786112 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.888759 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.888833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.888848 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.888866 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.888880 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.992305 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.992361 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.992374 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.992394 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:24 crc kubenswrapper[4775]: I0123 14:05:24.992405 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:24Z","lastTransitionTime":"2026-01-23T14:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.096006 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.096086 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.096100 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.096124 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.096141 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.198877 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.198932 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.198945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.198962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.198975 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.302660 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.302736 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.302751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.302779 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.302795 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.406133 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.406191 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.406206 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.406232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.406248 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.509326 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.509373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.509388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.509406 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.509419 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.613036 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.613097 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.613110 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.613131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.613148 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.713936 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.713910 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 05:10:18.186619784 +0000 UTC
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.714034 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.714094 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:05:25 crc kubenswrapper[4775]: E0123 14:05:25.714238 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 23 14:05:25 crc kubenswrapper[4775]: E0123 14:05:25.714342 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 23 14:05:25 crc kubenswrapper[4775]: E0123 14:05:25.714438 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.715791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.715846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.715861 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.715878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.715891 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.819310 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.819373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.819388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.819413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.819428 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.922341 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.922365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.922373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.922387 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:25 crc kubenswrapper[4775]: I0123 14:05:25.922418 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:25Z","lastTransitionTime":"2026-01-23T14:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.025278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.025327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.025337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.025358 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.025369 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.127573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.127612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.127627 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.127647 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.127657 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.230528 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.230570 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.230582 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.230599 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.230611 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.334101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.334634 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.334645 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.334670 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.334689 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.438111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.438166 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.438176 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.438197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.438209 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.540490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.540562 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.540574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.540622 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.540635 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.594186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.594225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.594236 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.594256 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.594269 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.608656 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:26Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.613283 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.613335 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.613348 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.613367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.613379 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.631522 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:26Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.636833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.636926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.636945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.636968 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.637016 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.655433 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:26Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.661408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.661494 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
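
Editor's note: the payload in the failed patch above is a Kubernetes strategic-merge patch. The `$setElementOrder/conditions` key pins the ordering of the `conditions` list, while the `conditions` entries carry only the fields being updated. A minimal sketch of that shape, trimmed to fields visible in the log (an illustrative dict, not the kubelet's actual code):

```python
# Illustrative shape of the node-status patch from the error above.
# Trimmed to fields visible in the log; not the kubelet's actual code.
import json

condition_order = ["MemoryPressure", "DiskPressure", "PIDPressure", "Ready"]

patch = {
    "status": {
        # Strategic-merge directive: keeps the conditions list in this order.
        "$setElementOrder/conditions": [{"type": t} for t in condition_order],
        # Only the entries being updated need to appear here.
        "conditions": [
            {
                "type": "Ready",
                "status": "False",
                "reason": "KubeletNotReady",
                "lastHeartbeatTime": "2026-01-23T14:05:26Z",
                "lastTransitionTime": "2026-01-23T14:05:26Z",
            }
        ],
    }
}

print(json.dumps(patch, indent=2))
```

The kubelet resends the same payload on each retry, which is why the full image list repeats verbatim in the attempts that follow.
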
event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.661509 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.661532 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.661554 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.675872 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:26Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.680852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.680899 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
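
Editor's note: every Ready condition in these records cites the same cause: no CNI configuration file in /etc/kubernetes/cni/net.d/. A rough sketch of that existence check, assuming the usual CRI-O/ocicni behavior of scanning the conf dir for *.conf, *.conflist and *.json files (an approximation, not a port of the actual Go code):

```python
# Rough sketch of the readiness check behind "no CNI configuration file
# in /etc/kubernetes/cni/net.d/". CRI-O (via ocicni/libcni) scans the
# conf dir for *.conf, *.conflist and *.json files; an empty result keeps
# NetworkReady=false. Approximation only, not a port of the Go code.
from pathlib import Path

CNI_CONF_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the log

def cni_config_files(conf_dir: Path) -> list[Path]:
    """Return candidate CNI config files, as the runtime would see them."""
    if not conf_dir.is_dir():
        return []
    return sorted(p for p in conf_dir.iterdir()
                  if p.suffix in {".conf", ".conflist", ".json"})

files = cni_config_files(CNI_CONF_DIR)
if files:
    print("CNI config candidates:", *files, sep="\n  ")
else:
    print(f"no CNI configuration file in {CNI_CONF_DIR} - network not ready")
```

On this node the directory is evidently empty or missing, so the kubelet keeps publishing NetworkReady=false until the network provider writes a config there.
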
event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.680913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.680933 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.680945 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.695740 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:26Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.695903 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.697917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
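
Editor's note: the retry loop ends here with "update node status exceeds retry count". The root cause reported on every attempt is the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, expired since 2025-08-24T17:21:41Z against a clock of 2026-01-23. A minimal sketch for inspecting that certificate from the node, assuming the third-party `cryptography` package is installed and the endpoint is reachable:

```python
# Sketch: read the validity window of the webhook's serving certificate.
# Host/port are taken from the log line (https://127.0.0.1:9743). Assumes
# the third-party 'cryptography' package; run from the node itself.
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # endpoint from the webhook error above

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False       # we only want to inspect the cert,
ctx.verify_mode = ssl.CERT_NONE  # not to validate the chain

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc).replace(tzinfo=None)  # naive UTC, like the fields
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)
print("expired:  ", now > cert.not_valid_after)
```

With verification disabled the TLS handshake still completes, so the expiry can be read even after the certificate has lapsed, which is exactly the state the webhook error describes.
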
event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.697946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.697960 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.697975 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.697984 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.713369 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:26 crc kubenswrapper[4775]: E0123 14:05:26.713478 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.714706 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:06:28.198676518 +0000 UTC Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.800688 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.800754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.800771 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.800788 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.800822 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.903513 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.903562 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.903574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.903593 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:26 crc kubenswrapper[4775]: I0123 14:05:26.903607 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:26Z","lastTransitionTime":"2026-01-23T14:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.006849 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.006935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.006955 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.006987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.007014 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.110355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.110402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.110418 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.110437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.110453 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.212944 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.213016 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.213035 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.213063 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.213080 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.315650 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.315734 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.315754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.315786 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.315833 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.418422 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.418481 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.418495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.418519 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.418533 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.521464 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.521506 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.521527 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.521545 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.521559 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.624698 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.624745 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.624758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.624777 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.624790 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.713851 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.713976 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.713881 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:27 crc kubenswrapper[4775]: E0123 14:05:27.714070 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
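
Editor's note: the certificate_manager lines (rotation deadline 2025-11-22 above, 2025-12-07 and 2025-12-09 below) report a different deadline on each sync for the same 2026-02-24 expiry. As I read client-go's certificate manager, the deadline is re-jittered into roughly the 70-90% span of the certificate's validity window each time it is computed; modeled below, with the issue time assumed to be one year before the logged expiry (the log shows only the expiry):

```python
# Model of the re-jittered rotation deadline. client-go's certificate
# manager (as I read its jitteryDuration helper) picks the deadline
# uniformly in roughly [70%, 90%] of the cert's validity window and
# recomputes it per sync, hence the differing values in the log.
# ASSUMPTION: issue time is one year before the logged expiry; the log
# only shows the expiry (2026-02-24 05:53:03 UTC).
import random
from datetime import datetime, timedelta

not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiry from the log
not_before = not_after - timedelta(days=365)   # assumed issue time
validity = (not_after - not_before).total_seconds()

for _ in range(3):
    frac = 0.7 + 0.2 * random.random()         # uniform in [0.7, 0.9)
    deadline = not_before + timedelta(seconds=validity * frac)
    print("rotation deadline:", deadline)
```

Note that all three logged deadlines are already in the past relative to the logged clock (2026-01-23), so serving-certificate rotation is overdue on every sync.
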
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:27 crc kubenswrapper[4775]: E0123 14:05:27.714145 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:27 crc kubenswrapper[4775]: E0123 14:05:27.714307 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.715721 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:18:52.693503781 +0000 UTC Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.727310 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.727338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.727347 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.727367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.727384 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.829931 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.830000 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.830024 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.830056 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.830080 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.934696 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.934751 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.934770 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.934845 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:27 crc kubenswrapper[4775]: I0123 14:05:27.934866 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:27Z","lastTransitionTime":"2026-01-23T14:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.037756 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.037857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.037871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.037897 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.038333 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.141702 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.141757 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.141768 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.141791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.141833 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.245178 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.245235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.245244 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.245257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.245266 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.348237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.348309 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.348324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.348379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.348397 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.450851 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.450916 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.450935 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.450953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.450966 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.553494 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.553571 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.553589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.553607 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.553620 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.657014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.657088 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.657108 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.657135 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.657181 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.713878 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:28 crc kubenswrapper[4775]: E0123 14:05:28.714113 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.715934 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 22:48:48.525311123 +0000 UTC Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.760508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.760569 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.760588 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.760613 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.760630 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.864065 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.864186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.864217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.864310 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.864338 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.967157 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.967205 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.967215 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.967231 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:28 crc kubenswrapper[4775]: I0123 14:05:28.967242 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:28Z","lastTransitionTime":"2026-01-23T14:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.070053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.070086 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.070098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.070116 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.070128 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:29Z","lastTransitionTime":"2026-01-23T14:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.173070 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.173115 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.173152 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.173168 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.173180 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:29Z","lastTransitionTime":"2026-01-23T14:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.276885 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.276969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.276988 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.277076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.277100 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:29Z","lastTransitionTime":"2026-01-23T14:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.381616 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.381675 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.381694 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.381720 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.381737 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:29Z","lastTransitionTime":"2026-01-23T14:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.485225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.485302 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.485315 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.485333 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.485343 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:29Z","lastTransitionTime":"2026-01-23T14:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.588889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.589006 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.589032 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.589075 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.589100 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:29Z","lastTransitionTime":"2026-01-23T14:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.692764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.692838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.692849 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.692868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.692882 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:29Z","lastTransitionTime":"2026-01-23T14:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.713337 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.713390 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.713453 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:29 crc kubenswrapper[4775]: E0123 14:05:29.713545 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:29 crc kubenswrapper[4775]: E0123 14:05:29.713729 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:29 crc kubenswrapper[4775]: E0123 14:05:29.714104 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.716577 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:57:49.640538967 +0000 UTC Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.795406 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.795495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.795521 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.795561 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.795586 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:29Z","lastTransitionTime":"2026-01-23T14:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.898159 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.898227 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.898237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.898253 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:29 crc kubenswrapper[4775]: I0123 14:05:29.898264 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:29Z","lastTransitionTime":"2026-01-23T14:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.001529 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.001608 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.001621 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.001645 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.001660 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:30Z","lastTransitionTime":"2026-01-23T14:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.104250 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.104295 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.104306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.104326 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.104340 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:30Z","lastTransitionTime":"2026-01-23T14:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.207375 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.207484 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.207509 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.207586 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.207616 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:30Z","lastTransitionTime":"2026-01-23T14:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.310499 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.310588 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.310612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.310651 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.310678 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:30Z","lastTransitionTime":"2026-01-23T14:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.413739 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.413778 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.413790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.413831 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.413844 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:30Z","lastTransitionTime":"2026-01-23T14:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.517608 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.517699 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.517722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.517759 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.517781 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:30Z","lastTransitionTime":"2026-01-23T14:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.621916 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.621997 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.622022 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.622049 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.622069 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:30Z","lastTransitionTime":"2026-01-23T14:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.713908 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:30 crc kubenswrapper[4775]: E0123 14:05:30.714243 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.716755 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:49:11.370837575 +0000 UTC Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.724883 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.724939 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.724954 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.724978 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.724996 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:30Z","lastTransitionTime":"2026-01-23T14:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.828564 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.828630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.828647 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.828672 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.828687 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:30Z","lastTransitionTime":"2026-01-23T14:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.931389 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.931434 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.931444 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.931459 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:30 crc kubenswrapper[4775]: I0123 14:05:30.931470 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:30Z","lastTransitionTime":"2026-01-23T14:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.033893 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.033938 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.033949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.033966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.033978 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:31Z","lastTransitionTime":"2026-01-23T14:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.137268 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.137327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.137345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.137368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.137386 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:31Z","lastTransitionTime":"2026-01-23T14:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.239356 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.239408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.239425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.239446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.239465 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:31Z","lastTransitionTime":"2026-01-23T14:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.341766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.341825 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.341834 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.341847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.341855 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:31Z","lastTransitionTime":"2026-01-23T14:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.445282 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.445336 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.445351 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.445370 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.445382 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:31Z","lastTransitionTime":"2026-01-23T14:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.548606 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.548677 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.548701 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.548729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.548751 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:31Z","lastTransitionTime":"2026-01-23T14:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.651167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.651202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.651213 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.651228 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.651241 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:31Z","lastTransitionTime":"2026-01-23T14:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.713149 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.713161 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.713365 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:31 crc kubenswrapper[4775]: E0123 14:05:31.713505 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:31 crc kubenswrapper[4775]: E0123 14:05:31.713995 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:31 crc kubenswrapper[4775]: E0123 14:05:31.714373 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.714638 4775 scope.go:117] "RemoveContainer" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.717072 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 23:32:48.458715443 +0000 UTC Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.729277 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.753448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.753907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.753993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.754076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.754146 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:31Z","lastTransitionTime":"2026-01-23T14:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.856695 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.856943 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.857033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.857113 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.857180 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:31Z","lastTransitionTime":"2026-01-23T14:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.960293 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.960614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.960626 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.960647 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:31 crc kubenswrapper[4775]: I0123 14:05:31.960659 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:31Z","lastTransitionTime":"2026-01-23T14:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.062729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.062766 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.062780 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.062817 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.062830 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.165993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.166047 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.166062 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.166083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.166099 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.220726 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/2.log" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.225599 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.226291 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.241110 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.251532 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78fc63a1-5cdd-4e02-ab5b-bf248837f07f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b281d05f695b9f070f8a73110e3b4ea722b237b9df9a31a80b787bd7ea51fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.269158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.269233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.269246 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.269266 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.269307 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.270200 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.281720 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.291468 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.306010 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.321717 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.333932 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.347335 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.361706 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373041 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373126 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373185 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.373407 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.385466 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.407868 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.422718 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.440426 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.454903 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.473717 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.475913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.475953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.475966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.476014 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.476029 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.487350 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.514442 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:32Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.578993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.579046 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.579059 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.579078 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.579093 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.682080 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.682149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.682173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.682198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.682214 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.713082 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:32 crc kubenswrapper[4775]: E0123 14:05:32.713274 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.718128 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:16:16.511935552 +0000 UTC Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.785233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.785278 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.785287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.785307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.785316 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.888300 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.888343 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.888354 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.888373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.888390 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.990988 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.991033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.991048 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.991066 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:32 crc kubenswrapper[4775]: I0123 14:05:32.991078 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:32Z","lastTransitionTime":"2026-01-23T14:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.116644 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.116685 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.116695 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.116708 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.116717 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.219689 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.219744 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.219760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.219785 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.219825 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.323513 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.323574 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.323592 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.323618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.323636 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.425926 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.425966 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.425978 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.425998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.426009 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.528060 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.528117 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.528135 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.528158 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.528175 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.630729 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.630767 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.630775 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.630790 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.630815 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.713889 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.713957 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:33 crc kubenswrapper[4775]: E0123 14:05:33.714101 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.714137 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:33 crc kubenswrapper[4775]: E0123 14:05:33.714291 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:33 crc kubenswrapper[4775]: E0123 14:05:33.714438 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.718916 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:19:07.165566257 +0000 UTC Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.730421 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.734054 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.734083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.734090 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.734102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.734112 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.743670 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.763256 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d5271779682d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\
\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.773618 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.792581 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a
5cab5204d3670039ea832157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.806338 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.821604 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.836181 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.836224 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.836237 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.836257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.836269 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.837193 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.850527 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.864584 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.884052 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.903147 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.918740 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.934119 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.938384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.938446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.938461 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.938482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.938499 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:33Z","lastTransitionTime":"2026-01-23T14:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.946240 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78fc63a1-5cdd-4e02-ab5b-bf248837f07f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b281d05f695b9f070f8a73110e3b4ea722b237b9df9a31a80b787bd7ea51fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.958751 4775 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.974251 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:33 crc kubenswrapper[4775]: I0123 14:05:33.988197 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:33Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.003081 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.041357 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.041429 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.041454 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.041484 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.041510 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.143945 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.144009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.144025 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.144045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.144057 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.235487 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/3.log" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.236306 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/2.log" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.239680 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" exitCode=1 Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.239734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.239781 4775 scope.go:117] "RemoveContainer" containerID="cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.240834 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:05:34 crc kubenswrapper[4775]: E0123 14:05:34.241064 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.247363 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.247404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.247420 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.247442 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.247460 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.264549 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.288201 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.307949 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.346610 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:33Z\\\",\\\"message\\\":\\\".go:311] 
Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 14:05:32.967045 6836 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967156 6836 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967220 6836 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967277 6836 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 14:05:32.967725 6836 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:05:32.967779 6836 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 14:05:32.967787 6836 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 14:05:32.967869 6836 factory.go:656] Stopping watch factory\\\\nI0123 14:05:32.967869 6836 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:05:32.967893 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0123 14:05:32.967883 6836 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a
360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.352765 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.352852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.352871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.352895 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.352913 4775 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.366777 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.383955 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"78fc63a1-5cdd-4e02-ab5b-bf248837f07f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b281d05f695b9f070f8a73110e3b4ea722b237b9df9a31a80b787bd7ea51fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.401297 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.420605 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.437962 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.456276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.456331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.456917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.456965 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.456988 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.459203 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.477963 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.489171 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.503503 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.516159 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.527233 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.539543 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559208 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559615 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559648 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559660 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.559689 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.572229 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.584100 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:34Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.662344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.662379 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.662391 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.662408 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.662420 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.713193 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:34 crc kubenswrapper[4775]: E0123 14:05:34.713349 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.719264 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:19:24.488313815 +0000 UTC Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.765658 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.765730 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.765749 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.765779 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.765826 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.871317 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.871386 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.871432 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.871475 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.871488 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.974553 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.974638 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.974662 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.975217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:34 crc kubenswrapper[4775]: I0123 14:05:34.975464 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:34Z","lastTransitionTime":"2026-01-23T14:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.079706 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.079772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.079792 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.079846 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.079868 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.182910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.182997 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.183016 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.183045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.183065 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.245332 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/3.log" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.285651 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.285712 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.285731 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.285755 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.285773 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.388276 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.388331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.388349 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.388373 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.388390 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.491831 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.491903 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.491923 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.491946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.491963 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.594955 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.595010 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.595022 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.595038 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.595050 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.663194 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.663387 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663474 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 14:06:39.66343762 +0000 UTC m=+146.658266360 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.663543 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663617 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663667 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663689 4775 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663709 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663740 4775 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663761 4775 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663771 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:39.663743968 +0000 UTC m=+146.658572748 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663787 4775 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.663638 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663867 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:39.663841921 +0000 UTC m=+146.658670691 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.663899 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:39.663886142 +0000 UTC m=+146.658714922 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.663934 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.664030 4775 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.664063 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:39.664054757 +0000 UTC m=+146.658883597 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.698232 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.698296 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.698312 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.698337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.698353 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.713196 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.713332 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.713532 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.713596 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.713717 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:35 crc kubenswrapper[4775]: E0123 14:05:35.713775 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.719638 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 19:15:11.893166892 +0000 UTC Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.802109 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.802216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.802250 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.802279 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.802298 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.906797 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.906869 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.906882 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.906901 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:35 crc kubenswrapper[4775]: I0123 14:05:35.906915 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:35Z","lastTransitionTime":"2026-01-23T14:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.010784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.010894 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.010912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.010937 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.010956 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.114382 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.114419 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.114435 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.114453 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.114463 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.262462 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.262526 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.262539 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.262558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.262573 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.365601 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.365632 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.365643 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.365659 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.365671 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.468345 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.468404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.468425 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.468448 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.468469 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.571290 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.571324 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.571335 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.571351 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.571360 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.673847 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.673892 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.673903 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.673919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.673931 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.713686 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:36 crc kubenswrapper[4775]: E0123 14:05:36.713883 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.720738 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 00:03:41.16099696 +0000 UTC Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.776151 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.776202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.776223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.776244 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.776258 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.879059 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.879144 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.879167 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.879197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.879219 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.983235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.983316 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.983344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.983381 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:36 crc kubenswrapper[4775]: I0123 14:05:36.983406 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:36Z","lastTransitionTime":"2026-01-23T14:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.056549 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.056618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.056635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.056665 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.056685 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.081381 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.087399 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.087467 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.087489 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.087534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.087554 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.105971 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.112257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.112331 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.112355 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.112390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.112415 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.132814 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.138710 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.138818 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.138838 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.138863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.138884 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.157047 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.163543 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.163600 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.163612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.163631 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.163646 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.183850 4775 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a063d3a2-7692-443a-9621-c3db4caa1aba\\\",\\\"systemUUID\\\":\\\"8a5d5c8e-ecf7-49d1-850c-74e085cfc75c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:37Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.184033 4775 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.186287 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.186368 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.186387 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.186418 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.186440 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.289912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.290021 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.290053 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.290101 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.290123 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.393305 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.393407 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.393426 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.393446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.393461 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.496721 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.496774 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.496784 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.496824 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.496849 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.600761 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.600833 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.600844 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.600871 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.600884 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.703678 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.703748 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.703764 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.703787 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.703850 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.713918 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.713996 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.713914 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.714109 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.714241 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:37 crc kubenswrapper[4775]: E0123 14:05:37.714455 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.722772 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:37:05.141246284 +0000 UTC Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.805772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.805827 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.805837 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.805851 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.805861 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.908667 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.908714 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.908722 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.908736 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:37 crc kubenswrapper[4775]: I0123 14:05:37.908745 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:37Z","lastTransitionTime":"2026-01-23T14:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.012111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.012229 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.012242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.012268 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.012285 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.115262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.115341 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.115357 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.115383 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.115397 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.220083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.220159 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.220187 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.220225 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.220252 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.323841 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.324575 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.324591 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.324614 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.324629 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.428325 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.428390 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.428404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.428430 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.428445 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.531393 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.531925 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.532081 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.532307 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.532462 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.636042 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.636104 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.636123 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.636149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.636167 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.713468 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:38 crc kubenswrapper[4775]: E0123 14:05:38.713686 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.723637 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 06:24:36.791156414 +0000 UTC Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.739102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.739148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.739159 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.739178 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.739193 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.843567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.843630 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.843647 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.843673 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.843690 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.946369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.946419 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.946430 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.946447 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:38 crc kubenswrapper[4775]: I0123 14:05:38.946462 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:38Z","lastTransitionTime":"2026-01-23T14:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.050115 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.050171 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.050182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.050203 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.050214 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.154046 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.154092 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.154102 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.154120 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.154133 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.275987 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.276062 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.276083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.276111 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.276134 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.378478 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.378522 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.378533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.378557 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.378577 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.481515 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.481572 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.481589 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.481611 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.481631 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.584794 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.584910 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.584930 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.584953 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.584972 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.688889 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.688969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.688991 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.689020 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.689048 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.713028 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.713128 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.713250 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:39 crc kubenswrapper[4775]: E0123 14:05:39.713495 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:39 crc kubenswrapper[4775]: E0123 14:05:39.713662 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:39 crc kubenswrapper[4775]: E0123 14:05:39.713837 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.724753 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:27:13.173459325 +0000 UTC Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792762 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792868 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792886 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792913 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.792938 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.896340 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.896401 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.896421 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.896446 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:39 crc kubenswrapper[4775]: I0123 14:05:39.896464 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:39Z","lastTransitionTime":"2026-01-23T14:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.000483 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.000559 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.000604 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.000635 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.000652 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.104197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.104286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.104306 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.104337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.104358 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.207920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.207997 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.208015 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.208039 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.208059 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.312129 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.312235 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.312257 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.312285 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.312304 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.423180 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.424121 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.424143 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.424160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.424170 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.527479 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.527517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.527529 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.527546 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.527559 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.630050 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.630171 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.630201 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.630233 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.630330 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.713508 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:40 crc kubenswrapper[4775]: E0123 14:05:40.713754 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.725675 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 08:07:06.341287461 +0000 UTC Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.732962 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.733000 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.733012 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.733027 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.733037 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836783 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836949 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.836993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.837012 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.940221 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.940267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.940282 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.940304 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:40 crc kubenswrapper[4775]: I0123 14:05:40.940321 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:40Z","lastTransitionTime":"2026-01-23T14:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.043760 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.043878 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.043909 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.043939 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.043958 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.146377 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.146469 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.146494 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.146525 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.146552 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.249485 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.249549 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.249573 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.249602 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.249624 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.352170 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.352229 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.352254 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.352275 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.352291 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.456259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.456346 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.456366 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.456392 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.456450 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.559255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.559396 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.559432 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.559459 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.559479 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.662758 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.662863 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.662884 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.662907 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.662926 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.713619 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.713643 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.713902 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:41 crc kubenswrapper[4775]: E0123 14:05:41.714038 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:41 crc kubenswrapper[4775]: E0123 14:05:41.714346 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:41 crc kubenswrapper[4775]: E0123 14:05:41.714429 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.726271 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 00:52:56.129420744 +0000 UTC Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.766199 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.766255 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.766271 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.766366 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.766390 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.869974 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.870038 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.870047 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.870076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.870086 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.973579 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.973618 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.973629 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.973643 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:41 crc kubenswrapper[4775]: I0123 14:05:41.973653 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:41Z","lastTransitionTime":"2026-01-23T14:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.077297 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.077362 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.077378 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.077402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.077422 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.180219 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.180286 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.180309 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.180338 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.180359 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.283252 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.283341 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.283365 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.283399 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.283423 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.387405 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.387518 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.387542 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.387567 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.387590 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.490517 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.490585 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.490608 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.490639 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.490659 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.594045 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.594116 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.594137 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.594197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.594218 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.696478 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.696508 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.696516 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.696533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.696545 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.713705 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:42 crc kubenswrapper[4775]: E0123 14:05:42.713854 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.726418 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:05:53.099374614 +0000 UTC Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.799071 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.799098 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.799107 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.799119 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.799128 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.902093 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.902182 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.902202 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.902223 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:42 crc kubenswrapper[4775]: I0123 14:05:42.902238 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:42Z","lastTransitionTime":"2026-01-23T14:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.005316 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.005418 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.005441 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.005471 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.005494 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.108969 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.109033 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.109051 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.109076 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.109094 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.212267 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.212327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.212344 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.212367 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.212385 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.315682 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.315728 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.315748 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.315763 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.315775 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.418857 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.418920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.418938 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.418963 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.418981 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.522337 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.522411 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.522435 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.522466 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.522487 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.626259 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.626327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.626349 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.626375 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.626391 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.715089 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.715172 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:43 crc kubenswrapper[4775]: E0123 14:05:43.715363 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:43 crc kubenswrapper[4775]: E0123 14:05:43.715476 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.715641 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:43 crc kubenswrapper[4775]: E0123 14:05:43.715717 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.727397 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:17:13.995998893 +0000 UTC Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.729384 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.729533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.729556 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.729583 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.729599 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.740775 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0977f59d-f8ab-406f-adf0-f3ac44424242\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84b740dc491
796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0123 14:04:16.300293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0123 14:04:16.301564 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2142879974/tls.crt::/tmp/serving-cert-2142879974/tls.key\\\\\\\"\\\\nI0123 14:04:31.531849 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 14:04:31.534538 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 14:04:31.534557 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 14:04:31.534584 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 14:04:31.534589 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 14:04:31.542050 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 14:04:31.542101 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542111 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 14:04:31.542120 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 14:04:31.542127 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 14:04:31.542132 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 14:04:31.542138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0123 14:04:31.542463 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0123 14:04:31.545117 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.758850 4775 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.776940 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4fea0767-0566-4214-855d-ed0373946271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://294e883c862812ede5342f361adda5b828ea9f64711bfc026d45d6df021d4529\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tbc24\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-4q9qg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.798838 4775 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cda6d9be40b2420198dfc660d56febc71295bdf64935938a416eec769b10f6ba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:03Z\\\",\\\"message\\\":\\\"er_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0123 14:05:03.714446 6455 base_network_controller_pods.go:477] [default/openshift-network-console/networking-console-plugin-85b44fc459-gdk6g] creating logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0123 14:05:03.714448 6455 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0123 14:05:03.714393 6455 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-console/console]} name:Service_openshift-console/console_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.194:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {d7d7b270-1480-47f8-bdf9-690dbab310cb}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0123 14:05:03.714525 6455 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:02Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:33Z\\\",\\\"message\\\":\\\".go:311] 
Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0123 14:05:32.967045 6836 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967156 6836 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967220 6836 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0123 14:05:32.967277 6836 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0123 14:05:32.967725 6836 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0123 14:05:32.967779 6836 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0123 14:05:32.967787 6836 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0123 14:05:32.967869 6836 factory.go:656] Stopping watch factory\\\\nI0123 14:05:32.967869 6836 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0123 14:05:32.967893 6836 ovnkube.go:599] Stopped ovnkube\\\\nI0123 14:05:32.967883 6836 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:05:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a
360c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d6jls\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qrvs8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.816053 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd95cd2-5d8c-4e14-bc94-67bb80749037\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2cadaf09282b48db63bf8a04d5ffb7e9b2d7ef471589b2029fa52ebfeba8f060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a14d6c874845ad030dbf165b47f5c984e11145da3530f3326958e4d34760083\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5032d8ac19db43f0458075e71595421f095c01eac4a46c5edffd34269cb44be0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c96dfb1816b412dd74d1b2370f2dadc05cb885c1d711d09bd27d7ac83f0a4faa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://41fd3b94e6f10eae4545d00a3795bb53455288ba681c635d5aa0d6c5a92aba2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc62af14f06b908f41742b323db87abc3b4e77cc1f09a8accf8753394d5f2cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf7ac25699709cce8192f5557945f250af63a969d336e4a791f66cb10f87b988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gddb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-8j5kp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.832503 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.832558 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc 
kubenswrapper[4775]: I0123 14:05:43.832568 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.832612 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.832624 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.832893 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-47lz2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"63ed1a97-c97e-40d0-afdf-260c475dc83f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:50Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cgjq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:50Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-47lz2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.850615 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04f5b2ad-c277-4ce9-8a8e-1ae658a6820c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3301338273f633b6c32caed6b35db93841743e57f219115ae7c32e16fe4683f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41b0811b85f5245c0352225af50738ebaa72c1e52a2940ee42f5bc99218313ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3e880aa503bbce5a53073f7f735d1defcde092982f39958cd58020b2139b7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cab8f4130435939b220e9c48430b269cfd8f87485157504a5a29f581ff33468c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.865525 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"78fc63a1-5cdd-4e02-ab5b-bf248837f07f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6b281d05f695b9f070f8a73110e3b4ea722b237b9df9a31a80b787bd7ea51fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cf0bf3bc741e6d2b5e451b53aec1f510f437f076819f0539f51621db401cb64f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.878196 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://62309dc46f4100ec9b831ee395e5232484c3c8b36f62c6f94d636a548f342dde\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27902c2b49c14724993c21727eca6c37f7f3be92477445746a003cb7f4b89573\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T
14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.889328 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.898949 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kv8zk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c6e25021-b268-4a6c-851d-43eb5504a3d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a92740131890387a6d9ca3b63d32f7045b84800fe1155eb67b7c81ac6ff9c50f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fmxcw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kv8zk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.910494 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hpxpf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba4447c0-bada-49eb-b6b4-b25feff627a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-23T14:05:23Z\\\",\\\"message\\\":\\\"2026-01-23T14:04:37+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4\\\\n2026-01-23T14:04:37+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_deec2807-d78d-4cb4-94e7-8d84a64fcbe4 to /host/opt/cni/bin/\\\\n2026-01-23T14:04:38Z [verbose] multus-daemon started\\\\n2026-01-23T14:04:38Z [verbose] Readiness Indicator file check\\\\n2026-01-23T14:05:23Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:36Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:05:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v9shl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:36Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hpxpf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.923099 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b5d9437e268240adf726797ed173438804dac1ce382ac82721cb60d8b8970f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.934384 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.935388 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.935450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.935463 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.935482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.935492 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:43Z","lastTransitionTime":"2026-01-23T14:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.947681 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9faab1b3-3f25-40a9-852f-64e14dd51f6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c3e86d8bd8f77572c3ed3ba515863b0d66b2654865e89c4b05bf47072c458b9a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f43da97bc3001c1066778d14029bd40271ef42849a6966caaf39da7174890aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-95ckj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z55mw\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.959463 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7d5bb46d-df53-4b3b-b3a6-f8c2567e2d7c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://755d6a9b4fdb33f0685190a274ab99b92c166791e5cd33cbe32f108423167b50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0f7577fd7770a66f9e6d3ec3d26ef25cc8fd28663d8db9bbce37be2086f7702\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e52d817b728c2ad895d39d14d95b4e82e448851f2c0bc8f17f73366e961d41df\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.976472 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cc579122-b138-460a-9e65-b246704f2911\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cec46bfb314e0bdf82966bb39e3aa2a426370b6d9dbc509c34bebc8946ec3716\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae1aaf5947c481b75920d1e2bbb12756b5b1e19324a2fe615f9144370f90842\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bac2f908996feb34cb7d119e4f994c49a588468a25740d1cfdd4c376b8c8377c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6692054df185ab511c5169fc769988d52717796
82d4d8e28d883d818b0fb4687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0482a074f15ca4ebe0fc0413556baafc5e24332e88c4fed410c243f8394da7b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7489bbc4f4cbbc6b54932fdc17460a81191f5b99a09dfb99f77f401958d045e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a6f3f181d5a4723fba3fef27c21e90653caa5586a0dc1357c66510c81a0876b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc7fd6351e97730c16df25104ece5146ca06942ccb7a31fe5afd9debe7f2986\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T14:04:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T14:04:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:13Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.988083 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:35Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f6e1a45461c3469d2dcafea7a815f13ee8775715d909ad787f3c5026f4d67f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:43 crc kubenswrapper[4775]: I0123 14:05:43.997936 4775 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dwmhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5473290b-b658-4193-9287-af63cfc2a1c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T14:04:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5197b3c00a6fcb270a1d4e5453a9d8fd41d017755600954bb54c8b4ad6dde29b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T14:04:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qtgsg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T14:04:38Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dwmhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-23T14:05:43Z is after 2025-08-24T17:21:41Z" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.037852 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.037895 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.037912 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:44 crc 
kubenswrapper[4775]: I0123 14:05:44.037933 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.037946 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.141450 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.141537 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.141570 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.141605 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.141628 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.245217 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.245262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.245271 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.245288 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.245298 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.348094 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.348145 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.348161 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.348185 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.348204 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.451839 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.451883 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.451893 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.451908 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.451918 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.554903 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.554946 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.554956 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.554970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.554979 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.657883 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.657919 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.657931 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.657948 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.657961 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.712958 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:44 crc kubenswrapper[4775]: E0123 14:05:44.713147 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.727670 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 08:51:43.217096209 +0000 UTC Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.760993 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.761034 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.761049 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.761071 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.761085 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.864073 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.864162 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.864186 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.864216 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.864236 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.967327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.967369 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.967380 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.967394 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:44 crc kubenswrapper[4775]: I0123 14:05:44.967403 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:44Z","lastTransitionTime":"2026-01-23T14:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.071444 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.071487 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.071495 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.071510 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.071520 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.175113 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.175149 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.175159 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.175175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.175186 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.277920 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.278009 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.278062 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.278093 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.278123 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.381423 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.381485 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.381507 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.381534 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.381554 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.484417 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.484492 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.484509 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.484533 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.484554 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.587692 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.587754 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.587772 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.587798 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.587840 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.691340 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.691402 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.691426 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.691455 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.691477 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.713247 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.713311 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:45 crc kubenswrapper[4775]: E0123 14:05:45.713474 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.713589 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:45 crc kubenswrapper[4775]: E0123 14:05:45.713758 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:45 crc kubenswrapper[4775]: E0123 14:05:45.713943 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.728290 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 15:28:47.819455143 +0000 UTC Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.794173 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.794242 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.794262 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.794299 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.794338 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.897911 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.897985 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.898006 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.898030 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:45 crc kubenswrapper[4775]: I0123 14:05:45.898048 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:45Z","lastTransitionTime":"2026-01-23T14:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.001131 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.001160 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.001169 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.001198 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.001210 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.104524 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.104576 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.104594 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.104613 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.104628 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.208598 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.208668 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.208680 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.208700 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.208718 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.311413 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.311503 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.311514 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.311532 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.311545 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.415099 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.415153 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.415170 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.415193 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.415209 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.517440 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.517482 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.517490 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.517503 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.517512 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.621204 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.621275 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.621295 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.621327 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.621349 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.714041 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:46 crc kubenswrapper[4775]: E0123 14:05:46.714551 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.725891 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.725952 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.725970 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.725998 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.726018 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.728950 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:55:57.242216556 +0000 UTC Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.829209 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.829310 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.829342 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.829382 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.829412 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.933271 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.933352 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.933372 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.933398 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:46 crc kubenswrapper[4775]: I0123 14:05:46.933418 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:46Z","lastTransitionTime":"2026-01-23T14:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.037083 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.037148 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.037175 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.037197 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.037215 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:47Z","lastTransitionTime":"2026-01-23T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.142230 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.142376 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.142404 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.142437 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.142472 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:47Z","lastTransitionTime":"2026-01-23T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.247570 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.247648 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.247668 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.247695 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.247716 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:47Z","lastTransitionTime":"2026-01-23T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.268791 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.268867 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.268887 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.268917 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.268930 4775 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T14:05:47Z","lastTransitionTime":"2026-01-23T14:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.343960 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr"] Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.344660 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.348659 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.348794 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.348846 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.352338 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.398170 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=76.398141921 podStartE2EDuration="1m16.398141921s" podCreationTimestamp="2026-01-23 14:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.398046358 +0000 UTC m=+94.392875128" watchObservedRunningTime="2026-01-23 14:05:47.398141921 +0000 UTC m=+94.392970701" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.404251 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.404340 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.404370 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.404561 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.404636 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" 
(UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.437154 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podStartSLOduration=71.437133474 podStartE2EDuration="1m11.437133474s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.436876137 +0000 UTC m=+94.431704887" watchObservedRunningTime="2026-01-23 14:05:47.437133474 +0000 UTC m=+94.431962234" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.488734 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kv8zk" podStartSLOduration=72.488714495 podStartE2EDuration="1m12.488714495s" podCreationTimestamp="2026-01-23 14:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.488207061 +0000 UTC m=+94.483035841" watchObservedRunningTime="2026-01-23 14:05:47.488714495 +0000 UTC m=+94.483543235" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506152 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506236 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506308 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506346 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506383 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506403 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506493 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hpxpf" podStartSLOduration=71.50646927 podStartE2EDuration="1m11.50646927s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.50610021 +0000 UTC m=+94.500928990" watchObservedRunningTime="2026-01-23 14:05:47.50646927 +0000 UTC m=+94.501298010" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.506539 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.507725 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.516671 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.537141 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/02c12c8d-0376-46e2-9b11-42ffa6ee2a4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-b8gsr\" (UID: \"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.540431 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-8j5kp" podStartSLOduration=71.540404598 podStartE2EDuration="1m11.540404598s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.524450121 +0000 UTC m=+94.519278861" watchObservedRunningTime="2026-01-23 14:05:47.540404598 +0000 UTC m=+94.535233338" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.575075 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.575053686 podStartE2EDuration="38.575053686s" podCreationTimestamp="2026-01-23 14:05:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.560010763 +0000 UTC m=+94.554839523" watchObservedRunningTime="2026-01-23 14:05:47.575053686 +0000 UTC m=+94.569882426" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.594300 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.59426658 podStartE2EDuration="16.59426658s" podCreationTimestamp="2026-01-23 14:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.575849157 +0000 UTC m=+94.570677897" watchObservedRunningTime="2026-01-23 14:05:47.59426658 +0000 UTC m=+94.589095330" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.669459 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z55mw" podStartSLOduration=71.669440622 podStartE2EDuration="1m11.669440622s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.668952149 +0000 UTC m=+94.663780889" watchObservedRunningTime="2026-01-23 14:05:47.669440622 +0000 UTC m=+94.664269362" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.672689 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.712986 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:47 crc kubenswrapper[4775]: E0123 14:05:47.713494 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.713061 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:47 crc kubenswrapper[4775]: E0123 14:05:47.713568 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.713004 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:47 crc kubenswrapper[4775]: E0123 14:05:47.713617 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.729240 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 03:03:24.456861603 +0000 UTC Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.729305 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.753298 4775 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.762452 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=76.762436371 podStartE2EDuration="1m16.762436371s" podCreationTimestamp="2026-01-23 14:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.761747673 +0000 UTC m=+94.756576413" watchObservedRunningTime="2026-01-23 14:05:47.762436371 +0000 UTC m=+94.757265111" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.763090 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.763083498 podStartE2EDuration="1m14.763083498s" podCreationTimestamp="2026-01-23 14:04:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.694121043 +0000 UTC m=+94.688949783" watchObservedRunningTime="2026-01-23 14:05:47.763083498 +0000 UTC m=+94.757912238" Jan 23 14:05:47 crc kubenswrapper[4775]: I0123 14:05:47.793829 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dwmhf" podStartSLOduration=71.79379008 podStartE2EDuration="1m11.79379008s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:05:47.793227165 +0000 UTC m=+94.788055905" watchObservedRunningTime="2026-01-23 14:05:47.79379008 +0000 UTC m=+94.788618820" Jan 23 14:05:48 crc kubenswrapper[4775]: I0123 14:05:48.311786 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" event={"ID":"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d","Type":"ContainerStarted","Data":"80307ac3f605396aace0d3d0c7e0cd41138ac2811501716ee208e46be09c238b"} Jan 23 14:05:48 crc kubenswrapper[4775]: I0123 14:05:48.311890 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" event={"ID":"02c12c8d-0376-46e2-9b11-42ffa6ee2a4d","Type":"ContainerStarted","Data":"2595841f56cbb803df15093f0988218f5395d5fbdfa7c2c80d3fc0ddddf2fd3e"} Jan 23 14:05:48 crc kubenswrapper[4775]: I0123 14:05:48.336958 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b8gsr" podStartSLOduration=72.336924297 podStartE2EDuration="1m12.336924297s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-23 14:05:48.335774527 +0000 UTC m=+95.330603307" watchObservedRunningTime="2026-01-23 14:05:48.336924297 +0000 UTC m=+95.331753087" Jan 23 14:05:48 crc kubenswrapper[4775]: I0123 14:05:48.713178 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:48 crc kubenswrapper[4775]: E0123 14:05:48.713325 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:48 crc kubenswrapper[4775]: I0123 14:05:48.714154 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:05:48 crc kubenswrapper[4775]: E0123 14:05:48.714373 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:05:49 crc kubenswrapper[4775]: I0123 14:05:49.713102 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:49 crc kubenswrapper[4775]: I0123 14:05:49.713131 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:49 crc kubenswrapper[4775]: E0123 14:05:49.713282 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:49 crc kubenswrapper[4775]: E0123 14:05:49.713427 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:49 crc kubenswrapper[4775]: I0123 14:05:49.713557 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:49 crc kubenswrapper[4775]: E0123 14:05:49.713695 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:50 crc kubenswrapper[4775]: I0123 14:05:50.712983 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:50 crc kubenswrapper[4775]: E0123 14:05:50.713260 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:51 crc kubenswrapper[4775]: I0123 14:05:51.713957 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:51 crc kubenswrapper[4775]: E0123 14:05:51.714065 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:51 crc kubenswrapper[4775]: I0123 14:05:51.714139 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:51 crc kubenswrapper[4775]: I0123 14:05:51.714174 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:51 crc kubenswrapper[4775]: E0123 14:05:51.714364 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:51 crc kubenswrapper[4775]: E0123 14:05:51.714483 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:52 crc kubenswrapper[4775]: I0123 14:05:52.713597 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:52 crc kubenswrapper[4775]: E0123 14:05:52.713916 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:53 crc kubenswrapper[4775]: I0123 14:05:53.714184 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:53 crc kubenswrapper[4775]: I0123 14:05:53.714232 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:53 crc kubenswrapper[4775]: E0123 14:05:53.716588 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:53 crc kubenswrapper[4775]: I0123 14:05:53.716628 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:53 crc kubenswrapper[4775]: E0123 14:05:53.716938 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:53 crc kubenswrapper[4775]: E0123 14:05:53.717076 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:54 crc kubenswrapper[4775]: I0123 14:05:54.687084 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:54 crc kubenswrapper[4775]: E0123 14:05:54.687286 4775 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:05:54 crc kubenswrapper[4775]: E0123 14:05:54.687362 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs podName:63ed1a97-c97e-40d0-afdf-260c475dc83f nodeName:}" failed. No retries permitted until 2026-01-23 14:06:58.687343567 +0000 UTC m=+165.682172307 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs") pod "network-metrics-daemon-47lz2" (UID: "63ed1a97-c97e-40d0-afdf-260c475dc83f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 14:05:54 crc kubenswrapper[4775]: I0123 14:05:54.713389 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:54 crc kubenswrapper[4775]: E0123 14:05:54.713743 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:55 crc kubenswrapper[4775]: I0123 14:05:55.713480 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:55 crc kubenswrapper[4775]: I0123 14:05:55.713571 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:55 crc kubenswrapper[4775]: E0123 14:05:55.713617 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:55 crc kubenswrapper[4775]: E0123 14:05:55.713710 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:55 crc kubenswrapper[4775]: I0123 14:05:55.713502 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:55 crc kubenswrapper[4775]: E0123 14:05:55.713831 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:56 crc kubenswrapper[4775]: I0123 14:05:56.713971 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:56 crc kubenswrapper[4775]: E0123 14:05:56.714647 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:57 crc kubenswrapper[4775]: I0123 14:05:57.713718 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:57 crc kubenswrapper[4775]: I0123 14:05:57.713775 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:57 crc kubenswrapper[4775]: I0123 14:05:57.713779 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:57 crc kubenswrapper[4775]: E0123 14:05:57.713991 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:57 crc kubenswrapper[4775]: E0123 14:05:57.714076 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:57 crc kubenswrapper[4775]: E0123 14:05:57.714178 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:58 crc kubenswrapper[4775]: I0123 14:05:58.712996 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:05:58 crc kubenswrapper[4775]: E0123 14:05:58.713116 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:05:59 crc kubenswrapper[4775]: I0123 14:05:59.713028 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:05:59 crc kubenswrapper[4775]: I0123 14:05:59.713110 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:05:59 crc kubenswrapper[4775]: E0123 14:05:59.713442 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:05:59 crc kubenswrapper[4775]: I0123 14:05:59.713504 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:05:59 crc kubenswrapper[4775]: E0123 14:05:59.713680 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:05:59 crc kubenswrapper[4775]: E0123 14:05:59.714441 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:05:59 crc kubenswrapper[4775]: I0123 14:05:59.714950 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:05:59 crc kubenswrapper[4775]: E0123 14:05:59.715269 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:06:00 crc kubenswrapper[4775]: I0123 14:06:00.713553 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:00 crc kubenswrapper[4775]: E0123 14:06:00.714160 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:01 crc kubenswrapper[4775]: I0123 14:06:01.713230 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:01 crc kubenswrapper[4775]: I0123 14:06:01.713230 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:01 crc kubenswrapper[4775]: I0123 14:06:01.713444 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:01 crc kubenswrapper[4775]: E0123 14:06:01.713668 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:01 crc kubenswrapper[4775]: E0123 14:06:01.713850 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:01 crc kubenswrapper[4775]: E0123 14:06:01.713915 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:02 crc kubenswrapper[4775]: I0123 14:06:02.713447 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:02 crc kubenswrapper[4775]: E0123 14:06:02.713658 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:03 crc kubenswrapper[4775]: I0123 14:06:03.713990 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:03 crc kubenswrapper[4775]: I0123 14:06:03.714044 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:03 crc kubenswrapper[4775]: I0123 14:06:03.714110 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:03 crc kubenswrapper[4775]: E0123 14:06:03.717475 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:03 crc kubenswrapper[4775]: E0123 14:06:03.717041 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:03 crc kubenswrapper[4775]: E0123 14:06:03.717631 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:04 crc kubenswrapper[4775]: I0123 14:06:04.714047 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:04 crc kubenswrapper[4775]: E0123 14:06:04.714239 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:05 crc kubenswrapper[4775]: I0123 14:06:05.713462 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:05 crc kubenswrapper[4775]: I0123 14:06:05.713548 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:05 crc kubenswrapper[4775]: I0123 14:06:05.713600 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:05 crc kubenswrapper[4775]: E0123 14:06:05.713713 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:05 crc kubenswrapper[4775]: E0123 14:06:05.713878 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:05 crc kubenswrapper[4775]: E0123 14:06:05.713997 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:06 crc kubenswrapper[4775]: I0123 14:06:06.713347 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:06 crc kubenswrapper[4775]: E0123 14:06:06.713606 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:07 crc kubenswrapper[4775]: I0123 14:06:07.713368 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:07 crc kubenswrapper[4775]: I0123 14:06:07.713503 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:07 crc kubenswrapper[4775]: E0123 14:06:07.713545 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:07 crc kubenswrapper[4775]: E0123 14:06:07.713769 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:07 crc kubenswrapper[4775]: I0123 14:06:07.714867 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:07 crc kubenswrapper[4775]: E0123 14:06:07.715252 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:08 crc kubenswrapper[4775]: I0123 14:06:08.713936 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:08 crc kubenswrapper[4775]: E0123 14:06:08.714144 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.382569 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/1.log" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.383303 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/0.log" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.383383 4775 generic.go:334] "Generic (PLEG): container finished" podID="ba4447c0-bada-49eb-b6b4-b25feff627a9" containerID="8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058" exitCode=1 Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.383442 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerDied","Data":"8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058"} Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.383520 4775 scope.go:117] "RemoveContainer" containerID="d86240040433581231b56e95c58b11163ce88d021b71777160f214e388d271ec" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.384203 4775 scope.go:117] "RemoveContainer" containerID="8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058" Jan 23 14:06:09 crc kubenswrapper[4775]: E0123 14:06:09.384460 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hpxpf_openshift-multus(ba4447c0-bada-49eb-b6b4-b25feff627a9)\"" pod="openshift-multus/multus-hpxpf" podUID="ba4447c0-bada-49eb-b6b4-b25feff627a9" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.713100 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.713215 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:09 crc kubenswrapper[4775]: E0123 14:06:09.713295 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:09 crc kubenswrapper[4775]: I0123 14:06:09.713235 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:09 crc kubenswrapper[4775]: E0123 14:06:09.713351 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:09 crc kubenswrapper[4775]: E0123 14:06:09.713441 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:10 crc kubenswrapper[4775]: I0123 14:06:10.389199 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/1.log" Jan 23 14:06:10 crc kubenswrapper[4775]: I0123 14:06:10.712920 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:10 crc kubenswrapper[4775]: E0123 14:06:10.713075 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:11 crc kubenswrapper[4775]: I0123 14:06:11.714130 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:11 crc kubenswrapper[4775]: I0123 14:06:11.714207 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:11 crc kubenswrapper[4775]: I0123 14:06:11.714320 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:11 crc kubenswrapper[4775]: E0123 14:06:11.714311 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:11 crc kubenswrapper[4775]: E0123 14:06:11.714486 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:11 crc kubenswrapper[4775]: E0123 14:06:11.714562 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:11 crc kubenswrapper[4775]: I0123 14:06:11.715472 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:06:11 crc kubenswrapper[4775]: E0123 14:06:11.715674 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qrvs8_openshift-ovn-kubernetes(bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" Jan 23 14:06:12 crc kubenswrapper[4775]: I0123 14:06:12.713082 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:12 crc kubenswrapper[4775]: E0123 14:06:12.713267 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:13 crc kubenswrapper[4775]: I0123 14:06:13.713264 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:13 crc kubenswrapper[4775]: E0123 14:06:13.714475 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:13 crc kubenswrapper[4775]: I0123 14:06:13.714515 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:13 crc kubenswrapper[4775]: I0123 14:06:13.714565 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:13 crc kubenswrapper[4775]: E0123 14:06:13.714646 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:13 crc kubenswrapper[4775]: E0123 14:06:13.714832 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:13 crc kubenswrapper[4775]: E0123 14:06:13.733737 4775 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 23 14:06:13 crc kubenswrapper[4775]: E0123 14:06:13.827421 4775 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 14:06:14 crc kubenswrapper[4775]: I0123 14:06:14.713789 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:14 crc kubenswrapper[4775]: E0123 14:06:14.714000 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:15 crc kubenswrapper[4775]: I0123 14:06:15.713678 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:15 crc kubenswrapper[4775]: I0123 14:06:15.713768 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:15 crc kubenswrapper[4775]: I0123 14:06:15.713684 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:15 crc kubenswrapper[4775]: E0123 14:06:15.713964 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:15 crc kubenswrapper[4775]: E0123 14:06:15.714101 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:15 crc kubenswrapper[4775]: E0123 14:06:15.714247 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:16 crc kubenswrapper[4775]: I0123 14:06:16.713716 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:16 crc kubenswrapper[4775]: E0123 14:06:16.713933 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:17 crc kubenswrapper[4775]: I0123 14:06:17.713788 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:17 crc kubenswrapper[4775]: I0123 14:06:17.713899 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:17 crc kubenswrapper[4775]: I0123 14:06:17.713818 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:17 crc kubenswrapper[4775]: E0123 14:06:17.714057 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:17 crc kubenswrapper[4775]: E0123 14:06:17.714221 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:17 crc kubenswrapper[4775]: E0123 14:06:17.714444 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:18 crc kubenswrapper[4775]: I0123 14:06:18.713668 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:18 crc kubenswrapper[4775]: E0123 14:06:18.713939 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:18 crc kubenswrapper[4775]: E0123 14:06:18.828649 4775 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 23 14:06:19 crc kubenswrapper[4775]: I0123 14:06:19.716134 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:19 crc kubenswrapper[4775]: I0123 14:06:19.716241 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:19 crc kubenswrapper[4775]: E0123 14:06:19.716272 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:19 crc kubenswrapper[4775]: E0123 14:06:19.716412 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:19 crc kubenswrapper[4775]: I0123 14:06:19.717031 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:19 crc kubenswrapper[4775]: E0123 14:06:19.717292 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:20 crc kubenswrapper[4775]: I0123 14:06:20.713894 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:20 crc kubenswrapper[4775]: E0123 14:06:20.714131 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:21 crc kubenswrapper[4775]: I0123 14:06:21.713398 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:21 crc kubenswrapper[4775]: I0123 14:06:21.713481 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:21 crc kubenswrapper[4775]: E0123 14:06:21.713581 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:21 crc kubenswrapper[4775]: E0123 14:06:21.713995 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:21 crc kubenswrapper[4775]: I0123 14:06:21.714206 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:21 crc kubenswrapper[4775]: E0123 14:06:21.714353 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:22 crc kubenswrapper[4775]: I0123 14:06:22.712922 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:22 crc kubenswrapper[4775]: E0123 14:06:22.713461 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:23 crc kubenswrapper[4775]: I0123 14:06:23.713494 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:23 crc kubenswrapper[4775]: I0123 14:06:23.713595 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:23 crc kubenswrapper[4775]: E0123 14:06:23.713715 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:23 crc kubenswrapper[4775]: I0123 14:06:23.717154 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:23 crc kubenswrapper[4775]: I0123 14:06:23.717485 4775 scope.go:117] "RemoveContainer" containerID="8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058" Jan 23 14:06:23 crc kubenswrapper[4775]: E0123 14:06:23.717471 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:23 crc kubenswrapper[4775]: E0123 14:06:23.717593 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:23 crc kubenswrapper[4775]: I0123 14:06:23.720150 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:06:23 crc kubenswrapper[4775]: E0123 14:06:23.829381 4775 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.441637 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/1.log" Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.441758 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerStarted","Data":"555e839180bbda237f6205ae573637b3ee9ad39df04b574cb5b7216b7c451510"} Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.444051 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/3.log" Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.447921 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerStarted","Data":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.448385 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.489535 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podStartSLOduration=108.489513431 podStartE2EDuration="1m48.489513431s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:24.488469373 +0000 UTC m=+131.483298113" watchObservedRunningTime="2026-01-23 14:06:24.489513431 +0000 UTC m=+131.484342181" Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.683223 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-47lz2"] Jan 23 14:06:24 crc kubenswrapper[4775]: I0123 14:06:24.683367 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:24 crc kubenswrapper[4775]: E0123 14:06:24.683491 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:25 crc kubenswrapper[4775]: I0123 14:06:25.713648 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:25 crc kubenswrapper[4775]: E0123 14:06:25.714150 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:25 crc kubenswrapper[4775]: I0123 14:06:25.713659 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:25 crc kubenswrapper[4775]: I0123 14:06:25.713739 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:25 crc kubenswrapper[4775]: E0123 14:06:25.714410 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:25 crc kubenswrapper[4775]: E0123 14:06:25.714563 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:26 crc kubenswrapper[4775]: I0123 14:06:26.713742 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:26 crc kubenswrapper[4775]: E0123 14:06:26.713913 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:27 crc kubenswrapper[4775]: I0123 14:06:27.713712 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:27 crc kubenswrapper[4775]: I0123 14:06:27.713860 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:27 crc kubenswrapper[4775]: E0123 14:06:27.713934 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 23 14:06:27 crc kubenswrapper[4775]: I0123 14:06:27.713993 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:27 crc kubenswrapper[4775]: E0123 14:06:27.714076 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 23 14:06:27 crc kubenswrapper[4775]: E0123 14:06:27.714199 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 23 14:06:28 crc kubenswrapper[4775]: I0123 14:06:28.713864 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:28 crc kubenswrapper[4775]: E0123 14:06:28.714157 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-47lz2" podUID="63ed1a97-c97e-40d0-afdf-260c475dc83f" Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.713082 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.713098 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.713127 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.718318 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.718700 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.718954 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 14:06:29 crc kubenswrapper[4775]: I0123 14:06:29.719787 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 14:06:30 crc kubenswrapper[4775]: I0123 14:06:30.713955 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:30 crc kubenswrapper[4775]: I0123 14:06:30.717995 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 14:06:30 crc kubenswrapper[4775]: I0123 14:06:30.718797 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.226992 4775 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.283333 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-svb79"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.284261 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.286748 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-577dd"] Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.286866 4775 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.286933 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.287784 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.288044 4775 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.288105 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.288485 4775 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.288535 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.288723 4775 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.288787 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.289903 4775 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.289957 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace 
\"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.291103 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.292110 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.293643 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq"] Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.293947 4775 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.293987 4775 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.294020 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.294075 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.294526 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.295574 4775 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.295644 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.295660 4775 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.295705 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.295779 4775 reflector.go:561] object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.295868 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.297917 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.298762 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mc4h4"] Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.299390 4775 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 
'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.299460 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.299407 4775 reflector.go:561] object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4": failed to list *v1.Secret: secrets "machine-approver-sa-dockercfg-nl2j4" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-cluster-machine-approver": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.299518 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-nl2j4\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-approver-sa-dockercfg-nl2j4\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-cluster-machine-approver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.299563 4775 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.299620 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.299757 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.299909 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.298791 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.300133 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.301341 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.302408 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.304958 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"] Jan 23 14:06:38 crc kubenswrapper[4775]: W0123 14:06:38.305435 4775 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 23 14:06:38 crc kubenswrapper[4775]: E0123 14:06:38.305504 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.305603 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.305917 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.306640 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.306743 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.307532 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4q8mj"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.308384 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.309717 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310102 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310155 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.309848 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.309991 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310598 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310658 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310711 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310788 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.310929 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.311658 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.313862 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.314484 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-mvqcg"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.314997 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.315089 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.315323 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.315348 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.315570 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.318881 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gqzl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.319495 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.320061 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.324723 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.325117 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.325215 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.325427 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.326442 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.326653 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.326758 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.326924 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.327225 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.329752 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.329923 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.330092 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.330192 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.329952 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.330836 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.331174 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.331373 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.331454 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.331850 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.332139 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.332310 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.332415 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.332474 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.332650 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.333031 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.334362 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.354054 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.354449 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.354613 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.355537 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.356034 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.356917 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-577dd"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.358893 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.360973 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.361213 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.361301 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.362659 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.364182 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.365660 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 
14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.365705 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.365864 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.365983 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366099 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366314 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366455 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366474 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366496 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366573 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366617 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366677 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366713 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366784 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366824 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366938 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.366994 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.367066 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.367111 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.367155 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.367197 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.367273 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.368602 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.369788 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.370843 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.370988 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.371217 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.371298 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-svb79"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.372503 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.372876 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.374049 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-mc4h4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.374241 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.375366 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.376936 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.377862 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.378558 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.380780 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bjb9d"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.381016 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.381473 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.382004 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.381488 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.383202 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.383648 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.384955 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nj2dd"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.399458 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.400645 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.401186 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.401362 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.402040 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.402741 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.418822 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.419854 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.420084 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.420998 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.421446 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422176 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422236 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6js2\" (UniqueName: \"kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422262 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422280 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422331 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422347 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-service-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422362 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6995952d-6d8a-494d-842c-1d5cf9ee1207-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422393 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422414 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6995952d-6d8a-494d-842c-1d5cf9ee1207-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422434 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422465 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422487 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422502 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-config\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422517 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcvf9\" (UniqueName: \"kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422553 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltngh\" (UniqueName: \"kubernetes.io/projected/f38f7554-61cc-493f-8705-8da5f91d3926-kube-api-access-ltngh\") pod 
\"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422568 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422582 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-auth-proxy-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422599 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t44w\" (UniqueName: \"kubernetes.io/projected/f9750de6-fc79-440e-8ad4-07acbe4edb49-kube-api-access-8t44w\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422634 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422648 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422664 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5th22\" (UniqueName: \"kubernetes.io/projected/ba896a24-e6f2-4480-807b-b3c5b6232cea-kube-api-access-5th22\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422695 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422711 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422725 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-encryption-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422738 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422757 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smh4x\" (UniqueName: \"kubernetes.io/projected/c575b767-e334-406f-849d-e562d70985fd-kube-api-access-smh4x\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422786 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-image-import-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422822 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-node-pullsecrets\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422839 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4bbc\" (UniqueName: \"kubernetes.io/projected/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-kube-api-access-b4bbc\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422856 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/216b36e4-0e40-4073-9432-d1977dc6e03a-machine-approver-tls\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422870 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-client\") pod 
\"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422900 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422916 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-serving-cert\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422931 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422947 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422979 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.422995 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk85x\" (UniqueName: \"kubernetes.io/projected/216b36e4-0e40-4073-9432-d1977dc6e03a-kube-api-access-kk85x\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423011 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c575b767-e334-406f-849d-e562d70985fd-audit-dir\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423030 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423061 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423077 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423095 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423109 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423142 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdjg2\" (UniqueName: \"kubernetes.io/projected/8ba1b8ce-8332-45c9-bfb0-9a1842dea009-kube-api-access-tdjg2\") pod \"downloads-7954f5f757-mvqcg\" (UID: \"8ba1b8ce-8332-45c9-bfb0-9a1842dea009\") " pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423160 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423174 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit-dir\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423187 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: 
I0123 14:06:38.423216 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423231 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba896a24-e6f2-4480-807b-b3c5b6232cea-serving-cert\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423247 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zngzz\" (UniqueName: \"kubernetes.io/projected/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-kube-api-access-zngzz\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423263 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423290 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-trusted-ca\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423305 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423320 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-encryption-config\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423365 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423381 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-serving-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423397 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423413 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423445 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-serving-cert\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423460 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqqd4\" (UniqueName: \"kubernetes.io/projected/6995952d-6d8a-494d-842c-1d5cf9ee1207-kube-api-access-mqqd4\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423476 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgbr\" (UniqueName: \"kubernetes.io/projected/85a9044b-9089-4a6a-87e6-06372c531aa9-kube-api-access-rdgbr\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423491 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423521 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-audit-policies\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423535 
4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-etcd-client\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423549 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423577 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423608 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423622 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.423637 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsv7w\" (UniqueName: \"kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.427027 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c9x8w"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.427884 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.428038 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.428191 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2lgz4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.428613 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.429395 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f7z9k"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.430223 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.433426 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.433908 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.440463 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.440507 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.440997 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.442521 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.444096 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.448089 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.448732 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.448897 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.449503 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.450351 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.457324 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.458932 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.459601 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.460657 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.462017 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.462365 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.462532 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.463271 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.463898 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.464958 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kmqrn"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.465689 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.467866 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.475646 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.477319 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.478129 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.478519 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.478978 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.479234 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.479401 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.479871 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-btttg"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.480765 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.486949 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.496430 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.497726 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mvqcg"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.499333 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.502409 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.503577 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.504618 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gqzl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.506049 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.513522 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.513925 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2lgz4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.517514 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.518072 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.521196 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f7z9k"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.525013 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.526238 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles\") pod 
\"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.527398 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba896a24-e6f2-4480-807b-b3c5b6232cea-serving-cert\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.526362 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.527279 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.527364 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.526183 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528126 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528459 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zngzz\" (UniqueName: \"kubernetes.io/projected/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-kube-api-access-zngzz\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528540 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528611 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-trusted-ca\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 
14:06:38.528682 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-encryption-config\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528754 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbaf4876-b99e-4096-9f36-5c888312ddab-trusted-ca\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528844 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.528963 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529396 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-serving-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529475 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529586 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529658 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwv8t\" (UniqueName: \"kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529730 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-config\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529816 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-serving-cert\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529899 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqqd4\" (UniqueName: \"kubernetes.io/projected/6995952d-6d8a-494d-842c-1d5cf9ee1207-kube-api-access-mqqd4\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529972 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdgbr\" (UniqueName: \"kubernetes.io/projected/85a9044b-9089-4a6a-87e6-06372c531aa9-kube-api-access-rdgbr\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530041 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530109 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-serving-cert\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530180 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530253 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-audit-policies\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530314 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-etcd-client\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: 
\"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530448 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530529 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530602 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530670 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsv7w\" (UniqueName: \"kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530737 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n99rp\" (UniqueName: \"kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530906 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530976 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531047 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k892r\" (UniqueName: \"kubernetes.io/projected/cc6b05de-2295-4c6a-8f11-367da8bdcf00-kube-api-access-k892r\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531121 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6js2\" (UniqueName: \"kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531213 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531487 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531567 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-service-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531635 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6995952d-6d8a-494d-842c-1d5cf9ee1207-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531699 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531773 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531876 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6995952d-6d8a-494d-842c-1d5cf9ee1207-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.531949 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wclcs\" (UniqueName: \"kubernetes.io/projected/13e16abe-9325-4638-8b20-7195b7af8e68-kube-api-access-wclcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532027 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532092 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532167 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532233 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-config\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532304 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcvf9\" (UniqueName: \"kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532368 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltngh\" (UniqueName: \"kubernetes.io/projected/f38f7554-61cc-493f-8705-8da5f91d3926-kube-api-access-ltngh\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532443 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532511 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-auth-proxy-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532577 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t44w\" (UniqueName: \"kubernetes.io/projected/f9750de6-fc79-440e-8ad4-07acbe4edb49-kube-api-access-8t44w\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532653 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532722 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532796 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532914 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5th22\" (UniqueName: \"kubernetes.io/projected/ba896a24-e6f2-4480-807b-b3c5b6232cea-kube-api-access-5th22\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.532981 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9782\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-kube-api-access-h9782\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc 
kubenswrapper[4775]: I0123 14:06:38.533058 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533133 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533197 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-encryption-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533259 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533328 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smh4x\" (UniqueName: \"kubernetes.io/projected/c575b767-e334-406f-849d-e562d70985fd-kube-api-access-smh4x\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533393 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533397 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-image-import-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533450 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533462 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-client\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533520 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-node-pullsecrets\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533541 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4bbc\" (UniqueName: \"kubernetes.io/projected/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-kube-api-access-b4bbc\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/216b36e4-0e40-4073-9432-d1977dc6e03a-machine-approver-tls\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533585 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-client\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533602 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533623 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533641 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-serving-cert\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533642 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-serving-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533665 
4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533704 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dbaf4876-b99e-4096-9f36-5c888312ddab-metrics-tls\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.530981 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-trusted-ca\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533738 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13e16abe-9325-4638-8b20-7195b7af8e68-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.533985 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-audit-policies\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.529695 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.534441 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.534458 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.534551 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.534640 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-encryption-config\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 
14:06:38.534691 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.534794 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.535211 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.535847 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-auth-proxy-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.535857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.535957 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536261 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536567 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536680 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6995952d-6d8a-494d-842c-1d5cf9ee1207-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: 
I0123 14:06:38.536814 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536888 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-service-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk85x\" (UniqueName: \"kubernetes.io/projected/216b36e4-0e40-4073-9432-d1977dc6e03a-kube-api-access-kk85x\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.536991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c575b767-e334-406f-849d-e562d70985fd-audit-dir\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537010 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-service-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537031 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537053 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537078 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537100 4775 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537118 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537141 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537615 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537649 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4q8mj"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537665 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.537686 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.538010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-node-pullsecrets\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.538084 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c575b767-e334-406f-849d-e562d70985fd-audit-dir\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.538540 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.538586 4775 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.538731 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-image-import-ca\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539067 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539105 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539333 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-etcd-client\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539345 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539371 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c575b767-e334-406f-849d-e562d70985fd-serving-cert\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539383 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539409 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdjg2\" (UniqueName: \"kubernetes.io/projected/8ba1b8ce-8332-45c9-bfb0-9a1842dea009-kube-api-access-tdjg2\") pod \"downloads-7954f5f757-mvqcg\" (UID: \"8ba1b8ce-8332-45c9-bfb0-9a1842dea009\") " pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539435 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539463 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit-dir\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539528 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f9750de6-fc79-440e-8ad4-07acbe4edb49-audit-dir\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.539771 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/216b36e4-0e40-4073-9432-d1977dc6e03a-machine-approver-tls\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.540109 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-etcd-client\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.540428 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.540509 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.540578 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9750de6-fc79-440e-8ad4-07acbe4edb49-trusted-ca-bundle\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.541004 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/216b36e4-0e40-4073-9432-d1977dc6e03a-config\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.541398 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-serving-cert\") pod \"apiserver-76f77b778f-mc4h4\" (UID: 
\"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.541460 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.541488 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c575b767-e334-406f-849d-e562d70985fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.541553 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba896a24-e6f2-4480-807b-b3c5b6232cea-config\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.542300 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-m5nll"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.542453 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.542456 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.542461 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.543029 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f9750de6-fc79-440e-8ad4-07acbe4edb49-encryption-config\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.543305 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.543488 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.543521 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bvqqf"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.543953 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6995952d-6d8a-494d-842c-1d5cf9ee1207-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.544077 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.544323 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c9x8w"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.545357 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bjb9d"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.545658 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.546326 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.546408 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.547425 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.548442 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.549431 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m5nll"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.550513 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.550934 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.551482 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.551544 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.552549 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.553529 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.554538 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-btttg"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.555588 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bvqqf"] Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.559362 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba896a24-e6f2-4480-807b-b3c5b6232cea-serving-cert\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.567146 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.586554 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.607178 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.626874 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.639949 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.639992 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbaf4876-b99e-4096-9f36-5c888312ddab-trusted-ca\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640018 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwv8t\" (UniqueName: \"kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640037 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-config\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640062 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-serving-cert\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640081 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640612 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640847 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n99rp\" (UniqueName: \"kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.640931 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k892r\" (UniqueName: \"kubernetes.io/projected/cc6b05de-2295-4c6a-8f11-367da8bdcf00-kube-api-access-k892r\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641000 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641072 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wclcs\" (UniqueName: \"kubernetes.io/projected/13e16abe-9325-4638-8b20-7195b7af8e68-kube-api-access-wclcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641217 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641269 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9782\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-kube-api-access-h9782\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641332 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-client\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641398 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dbaf4876-b99e-4096-9f36-5c888312ddab-metrics-tls\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641439 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13e16abe-9325-4638-8b20-7195b7af8e68-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641490 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-service-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.641548 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.643570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dbaf4876-b99e-4096-9f36-5c888312ddab-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.646320 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.646878 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/13e16abe-9325-4638-8b20-7195b7af8e68-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.647575 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-serving-cert\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.649525 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dbaf4876-b99e-4096-9f36-5c888312ddab-metrics-tls\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.656917 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-client\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.667284 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.675112 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-config\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.687086 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.695206 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.707509 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.727696 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.735137 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/cc6b05de-2295-4c6a-8f11-367da8bdcf00-etcd-service-ca\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.746656 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.767276 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.775784 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.787946 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.798599 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.808151 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.827168 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.847596 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.866712 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.887156 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.897355 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.914999 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.925679 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.926598 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.946883 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.967546 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 14:06:38 crc kubenswrapper[4775]: I0123 14:06:38.986443 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.006954 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.026352 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.047129 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.067162 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.088189 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.106291 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.126708 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.146323 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.167173 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.187322 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.206497 4775 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.227746 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.247146 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.267169 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 14:06:39 crc 
kubenswrapper[4775]: I0123 14:06:39.287457 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.327379 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.346873 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.366199 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.386484 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.426868 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.444889 4775 request.go:700] Waited for 1.003387183s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/secrets?fieldSelector=metadata.name%3Dkube-controller-manager-operator-dockercfg-gkqpw&limit=500&resourceVersion=0 Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.446742 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.467864 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.487117 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.507127 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.526386 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.530060 4775 secret.go:188] Couldn't get secret openshift-authentication-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.530156 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert podName:f38f7554-61cc-493f-8705-8da5f91d3926 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.030127591 +0000 UTC m=+147.024956361 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert") pod "authentication-operator-69f744f599-577dd" (UID: "f38f7554-61cc-493f-8705-8da5f91d3926") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.533500 4775 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.533616 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config podName:85a9044b-9089-4a6a-87e6-06372c531aa9 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.033586934 +0000 UTC m=+147.028415704 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config") pod "machine-api-operator-5694c8668f-svb79" (UID: "85a9044b-9089-4a6a-87e6-06372c531aa9") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.534791 4775 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.534903 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images podName:85a9044b-9089-4a6a-87e6-06372c531aa9 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.034877973 +0000 UTC m=+147.029706803 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images") pod "machine-api-operator-5694c8668f-svb79" (UID: "85a9044b-9089-4a6a-87e6-06372c531aa9") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543350 4775 configmap.go:193] Couldn't get configMap openshift-authentication-operator/authentication-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543367 4775 secret.go:188] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543406 4775 configmap.go:193] Couldn't get configMap openshift-authentication-operator/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543419 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config podName:f38f7554-61cc-493f-8705-8da5f91d3926 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.043401937 +0000 UTC m=+147.038230697 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config") pod "authentication-operator-69f744f599-577dd" (UID: "f38f7554-61cc-493f-8705-8da5f91d3926") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543490 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls podName:85a9044b-9089-4a6a-87e6-06372c531aa9 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.043470889 +0000 UTC m=+147.038299659 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls") pod "machine-api-operator-5694c8668f-svb79" (UID: "85a9044b-9089-4a6a-87e6-06372c531aa9") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.543512 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle podName:f38f7554-61cc-493f-8705-8da5f91d3926 nodeName:}" failed. No retries permitted until 2026-01-23 14:06:40.04350108 +0000 UTC m=+147.038329850 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle") pod "authentication-operator-69f744f599-577dd" (UID: "f38f7554-61cc-493f-8705-8da5f91d3926") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.547586 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.565837 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.587306 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.606001 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.626937 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.647537 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.665995 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.686594 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.706478 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 
14:06:39.726265 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.747423 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.758493 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:39 crc kubenswrapper[4775]: E0123 14:06:39.758618 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:08:41.758589884 +0000 UTC m=+268.753418664 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.758793 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.758891 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.758923 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.758963 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.759780 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.762477 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.764156 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.764628 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.766600 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.787212 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.807325 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.826750 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.847118 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.866868 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.886988 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.906414 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.927691 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.938665 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.947301 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.958637 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.966877 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.967976 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:39 crc kubenswrapper[4775]: I0123 14:06:39.987345 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.009390 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.028903 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.048196 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.098717 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.098939 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.098988 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.099040 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.099084 4775 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.099152 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.100046 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.102203 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.106326 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.127573 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.147745 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.166629 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.186126 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.212192 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: W0123 14:06:40.215921 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-8ddf0268ebdc3fc0acc844a9e2c036935d9f6efb1c5ce9c49a7c74146aae22ed WatchSource:0}: Error finding container 8ddf0268ebdc3fc0acc844a9e2c036935d9f6efb1c5ce9c49a7c74146aae22ed: Status 404 returned error can't find the container with id 8ddf0268ebdc3fc0acc844a9e2c036935d9f6efb1c5ce9c49a7c74146aae22ed Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.247786 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zngzz\" (UniqueName: \"kubernetes.io/projected/a5c75370-d1c6-43bd-a8e8-8836ea5bdb22-kube-api-access-zngzz\") pod \"cluster-samples-operator-665b6dd947-ddqcf\" (UID: \"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.259299 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6js2\" (UniqueName: \"kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2\") pod \"oauth-openshift-558db77b4-4q8mj\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.260785 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.277616 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.279793 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4bbc\" (UniqueName: \"kubernetes.io/projected/549e54fa-53eb-4a9d-9578-5cfbd02bb28d-kube-api-access-b4bbc\") pod \"openshift-apiserver-operator-796bbdcf4f-qnhrq\" (UID: \"549e54fa-53eb-4a9d-9578-5cfbd02bb28d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.302074 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t44w\" (UniqueName: \"kubernetes.io/projected/f9750de6-fc79-440e-8ad4-07acbe4edb49-kube-api-access-8t44w\") pod \"apiserver-76f77b778f-mc4h4\" (UID: \"f9750de6-fc79-440e-8ad4-07acbe4edb49\") " pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.322696 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsv7w\" (UniqueName: \"kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w\") pod \"route-controller-manager-6576b87f9c-lqcpn\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.385857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqqd4\" (UniqueName: \"kubernetes.io/projected/6995952d-6d8a-494d-842c-1d5cf9ee1207-kube-api-access-mqqd4\") pod \"openshift-controller-manager-operator-756b6f6bc6-gc9bh\" (UID: \"6995952d-6d8a-494d-842c-1d5cf9ee1207\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.439439 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf"] Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.444367 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdjg2\" (UniqueName: \"kubernetes.io/projected/8ba1b8ce-8332-45c9-bfb0-9a1842dea009-kube-api-access-tdjg2\") pod \"downloads-7954f5f757-mvqcg\" (UID: \"8ba1b8ce-8332-45c9-bfb0-9a1842dea009\") " pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.464976 4775 request.go:700] Waited for 1.924366567s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.477935 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4q8mj"] Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.480244 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcvf9\" (UniqueName: 
\"kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9\") pod \"controller-manager-879f6c89f-v2bx4\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") " pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:40 crc kubenswrapper[4775]: W0123 14:06:40.484164 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3066d31d_92a4_45a7_b368_ba66d5689456.slice/crio-74f4cd2270219100871d3310c76c771eee7c27cb5f3b7f3244692cc8ce1e0535 WatchSource:0}: Error finding container 74f4cd2270219100871d3310c76c771eee7c27cb5f3b7f3244692cc8ce1e0535: Status 404 returned error can't find the container with id 74f4cd2270219100871d3310c76c771eee7c27cb5f3b7f3244692cc8ce1e0535 Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.486950 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.500891 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.506310 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.520981 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"75a4a1a4529a6e632b8fa862424543e4609219da0d81806f206e32abd5cd95fb"} Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.522317 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"18810222cbc1a0dc699884a78e50885f0c7718049d60a9ccfa905497d5b065d8"} Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.523591 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" event={"ID":"3066d31d-92a4-45a7-b368-ba66d5689456","Type":"ContainerStarted","Data":"74f4cd2270219100871d3310c76c771eee7c27cb5f3b7f3244692cc8ce1e0535"} Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.524886 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"8ddf0268ebdc3fc0acc844a9e2c036935d9f6efb1c5ce9c49a7c74146aae22ed"} Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.526206 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.536128 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.547200 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.552237 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.565831 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.569958 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.586841 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.604003 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.606268 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.617704 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.653964 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-bound-sa-token\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.686358 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9782\" (UniqueName: \"kubernetes.io/projected/dbaf4876-b99e-4096-9f36-5c888312ddab-kube-api-access-h9782\") pod \"ingress-operator-5b745b69d9-xpzqz\" (UID: \"dbaf4876-b99e-4096-9f36-5c888312ddab\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.706366 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.715185 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwv8t\" (UniqueName: \"kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t\") pod \"marketplace-operator-79b997595-pmcq8\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.733893 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.734741 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n99rp\" (UniqueName: \"kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp\") pod \"collect-profiles-29486280-gf96b\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.745720 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k892r\" (UniqueName: \"kubernetes.io/projected/cc6b05de-2295-4c6a-8f11-367da8bdcf00-kube-api-access-k892r\") pod \"etcd-operator-b45778765-bjb9d\" (UID: \"cc6b05de-2295-4c6a-8f11-367da8bdcf00\") " pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.746597 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.774302 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.780764 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.786990 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.790087 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f38f7554-61cc-493f-8705-8da5f91d3926-config\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.806306 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.847067 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.866983 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.887034 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.926778 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.937327 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltngh\" (UniqueName: 
\"kubernetes.io/projected/f38f7554-61cc-493f-8705-8da5f91d3926-kube-api-access-ltngh\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.947032 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.956292 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk85x\" (UniqueName: \"kubernetes.io/projected/216b36e4-0e40-4073-9432-d1977dc6e03a-kube-api-access-kk85x\") pod \"machine-approver-56656f9798-zbzw5\" (UID: \"216b36e4-0e40-4073-9432-d1977dc6e03a\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.974702 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.983875 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f38f7554-61cc-493f-8705-8da5f91d3926-serving-cert\") pod \"authentication-operator-69f744f599-577dd\" (UID: \"f38f7554-61cc-493f-8705-8da5f91d3926\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.986618 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.991551 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdgbr\" (UniqueName: \"kubernetes.io/projected/85a9044b-9089-4a6a-87e6-06372c531aa9-kube-api-access-rdgbr\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:40 crc kubenswrapper[4775]: I0123 14:06:40.999653 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wclcs\" (UniqueName: \"kubernetes.io/projected/13e16abe-9325-4638-8b20-7195b7af8e68-kube-api-access-wclcs\") pod \"control-plane-machine-set-operator-78cbb6b69f-psxgx\" (UID: \"13e16abe-9325-4638-8b20-7195b7af8e68\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.008699 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.010446 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.026839 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.030202 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-images\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:41 
crc kubenswrapper[4775]: I0123 14:06:41.047126 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.054918 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/85a9044b-9089-4a6a-87e6-06372c531aa9-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.167910 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5th22\" (UniqueName: \"kubernetes.io/projected/ba896a24-e6f2-4480-807b-b3c5b6232cea-kube-api-access-5th22\") pod \"console-operator-58897d9998-7gqzl\" (UID: \"ba896a24-e6f2-4480-807b-b3c5b6232cea\") " pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.168229 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85a9044b-9089-4a6a-87e6-06372c531aa9-config\") pod \"machine-api-operator-5694c8668f-svb79\" (UID: \"85a9044b-9089-4a6a-87e6-06372c531aa9\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.168562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smh4x\" (UniqueName: \"kubernetes.io/projected/c575b767-e334-406f-849d-e562d70985fd-kube-api-access-smh4x\") pod \"apiserver-7bbb656c7d-tsdcf\" (UID: \"c575b767-e334-406f-849d-e562d70985fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.168608 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.169219 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.170891 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.170980 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.171019 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.170894 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.171469 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.171906 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.171959 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkptx\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.172018 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.172472 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.173073 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:41.673059185 +0000 UTC m=+148.667887935 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.173548 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.173660 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.173699 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.173886 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.195873 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.276473 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.276721 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:41.77668164 +0000 UTC m=+148.771510400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.276920 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e6a5f5-108e-4832-8036-58e1228a7f4f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.276953 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c7fae259-48f4-4d23-8685-6440a5246423-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.276987 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-service-ca-bundle\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277003 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-plugins-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277021 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277037 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7fae259-48f4-4d23-8685-6440a5246423-serving-cert\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277094 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f5d381d-3a9d-4ba4-85fb-e9008e359729-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 
14:06:41.277173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277215 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.277241 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.278885 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.279021 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.279080 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ps7f\" (UniqueName: \"kubernetes.io/projected/b2e6a5f5-108e-4832-8036-58e1228a7f4f-kube-api-access-9ps7f\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.280982 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.282167 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf9wq\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-kube-api-access-mf9wq\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc 
kubenswrapper[4775]: I0123 14:06:41.282266 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.282431 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pmf9\" (UniqueName: \"kubernetes.io/projected/c7fae259-48f4-4d23-8685-6440a5246423-kube-api-access-5pmf9\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.282517 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-default-certificate\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.282920 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bplmh\" (UniqueName: \"kubernetes.io/projected/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-kube-api-access-bplmh\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.282998 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-images\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.283331 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.283400 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-metrics-certs\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.283460 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.283842 4775 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.283905 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4j4h\" (UniqueName: \"kubernetes.io/projected/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-kube-api-access-q4j4h\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.284340 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:41.784312798 +0000 UTC m=+148.779141548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.288799 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkptx\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.289666 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.291883 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f5d381d-3a9d-4ba4-85fb-e9008e359729-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.292786 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-csi-data-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.293048 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-proxy-tls\") pod 
\"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.293383 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.295428 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stlnp\" (UniqueName: \"kubernetes.io/projected/aaac7553-88f9-49bd-811f-e993ad0cd40d-kube-api-access-stlnp\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.295511 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.296251 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.296322 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-socket-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.296441 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.296489 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.296907 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.297892 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.298063 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-stats-auth\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.298305 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-mountpoint-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.298487 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-registration-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.298757 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgvmt\" (UniqueName: \"kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.307202 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.332177 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.368731 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkptx\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: W0123 14:06:41.372662 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod216b36e4_0e40_4073_9432_d1977dc6e03a.slice/crio-48f09891a71c60da2dd93d7b65738fd065066ef5c721963f0ff962507f68292a WatchSource:0}: 
Error finding container 48f09891a71c60da2dd93d7b65738fd065066ef5c721963f0ff962507f68292a: Status 404 returned error can't find the container with id 48f09891a71c60da2dd93d7b65738fd065066ef5c721963f0ff962507f68292a Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408060 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408157 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408193 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-node-bootstrap-token\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408229 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-webhook-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408262 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26bb6\" (UniqueName: \"kubernetes.io/projected/98e5fa0e-5fb3-4a38-bcdc-328a22d4460f-kube-api-access-26bb6\") pod \"migrator-59844c95c7-br76j\" (UID: \"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408324 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-stats-auth\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408350 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d1f9f7b-5676-4445-b8ec-1288e6beff20-metrics-tls\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408374 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-mountpoint-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408395 4775 
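
Two entries in this run stand apart from the routine mount traffic. The W0123 manager.go:1169 warning is cAdvisor handling a cgroup watch event for container 48f09891a71c60da... after CRI-O has already forgotten it (Status 404): during startup churn, containers can exit between the cgroup notification and the runtime lookup, so the watcher logs the event and keeps going. The I0123 reconciler_common.go:159 entry after it starts UnmountVolume for the same hostpath-provisioner PVC seen above, this time tearing it down for the terminated pod UID 8f668bae-612b-4b75-9490-919e737c6a3b, the volume's previous consumer. A hedged Go sketch of the tolerate-and-continue pattern for the 404 case (illustrative only, not cAdvisor's code):

    // watchtolerance.go: why a watch-event handler warns and continues on
    // "container not found" instead of failing the whole watch loop.
    package main

    import (
        "errors"
        "fmt"
    )

    // Hypothetical sentinel for a 404-style runtime lookup failure.
    var errNotFound = errors.New("can't find the container")

    // processEvent escalates only unexpected errors; a not-found lookup
    // is expected churn when a container exits mid-event.
    func processEvent(lookup func(string) error, id string) error {
        if err := lookup(id); err != nil {
            if errors.Is(err, errNotFound) {
                fmt.Printf("W: failed to process watch event for %s: %v\n", id, err)
                return nil // benign: the container is simply gone
            }
            return err // real failure, propagate
        }
        return nil
    }

    func main() {
        gone := func(string) error { return fmt.Errorf("status 404: %w", errNotFound) }
        _ = processEvent(gone, "48f09891a71c")
    }
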
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-registration-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408418 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2f579b-0f13-47dd-9566-dd57100ab22a-config\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408439 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqmss\" (UniqueName: \"kubernetes.io/projected/384fd47a-81d2-4219-8a66-fbeec5bae860-kube-api-access-hqmss\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408466 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgvmt\" (UniqueName: \"kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408491 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jmf4\" (UniqueName: \"kubernetes.io/projected/4304b2e3-9359-4caf-94dd-1e31716fee56-kube-api-access-8jmf4\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408515 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r78j\" (UniqueName: \"kubernetes.io/projected/d7707d7a-bfb7-4600-98f4-be607d9e77f4-kube-api-access-9r78j\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408576 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b2e6a5f5-108e-4832-8036-58e1228a7f4f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408601 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c7fae259-48f4-4d23-8685-6440a5246423-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408648 4775 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-service-ca-bundle\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408704 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-plugins-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408748 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408774 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-config\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408828 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7fae259-48f4-4d23-8685-6440a5246423-serving-cert\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408884 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f5d381d-3a9d-4ba4-85fb-e9008e359729-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408913 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d1f9f7b-5676-4445-b8ec-1288e6beff20-config-volume\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408929 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408945 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwmqb\" (UniqueName: \"kubernetes.io/projected/f73c288c-acf3-4ce7-81c7-63953b2fc087-kube-api-access-pwmqb\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408963 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408982 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88ecf2d3-bdec-4fe8-a567-44550e85bb19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.408998 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm2g2\" (UniqueName: \"kubernetes.io/projected/88ecf2d3-bdec-4fe8-a567-44550e85bb19-kube-api-access-wm2g2\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409015 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ecf2d3-bdec-4fe8-a567-44550e85bb19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409039 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409054 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmxm2\" (UniqueName: \"kubernetes.io/projected/6d1f9f7b-5676-4445-b8ec-1288e6beff20-kube-api-access-vmxm2\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409071 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-profile-collector-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409094 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-serving-cert\") pod 
\"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409110 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ps7f\" (UniqueName: \"kubernetes.io/projected/b2e6a5f5-108e-4832-8036-58e1228a7f4f-kube-api-access-9ps7f\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409135 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mf9wq\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-kube-api-access-mf9wq\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409161 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409177 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pmf9\" (UniqueName: \"kubernetes.io/projected/c7fae259-48f4-4d23-8685-6440a5246423-kube-api-access-5pmf9\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409192 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409232 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-default-certificate\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409249 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wxmb\" (UniqueName: \"kubernetes.io/projected/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-kube-api-access-5wxmb\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409317 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxdqs\" (UniqueName: \"kubernetes.io/projected/924bd720-98da-4f7b-afbc-a7bfa822368f-kube-api-access-kxdqs\") pod \"ingress-canary-m5nll\" (UID: 
\"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409335 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-images\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409351 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bplmh\" (UniqueName: \"kubernetes.io/projected/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-kube-api-access-bplmh\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409376 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7707d7a-bfb7-4600-98f4-be607d9e77f4-tmpfs\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409394 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-metrics-certs\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409410 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409426 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad73212-43f6-49db-a38b-678185cbe9d4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409464 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv4f7\" (UniqueName: \"kubernetes.io/projected/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-kube-api-access-cv4f7\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409478 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-config\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409494 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7pvl\" (UniqueName: \"kubernetes.io/projected/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-kube-api-access-j7pvl\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409508 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409522 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/924bd720-98da-4f7b-afbc-a7bfa822368f-cert\") pod \"ingress-canary-m5nll\" (UID: \"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409538 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4j4h\" (UniqueName: \"kubernetes.io/projected/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-kube-api-access-q4j4h\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409553 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad73212-43f6-49db-a38b-678185cbe9d4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409576 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-certs\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409591 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8w6h\" (UniqueName: \"kubernetes.io/projected/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-kube-api-access-w8w6h\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409614 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-srv-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" 
Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409629 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e802822-9935-46de-947b-c77bf8da4f9e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409646 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2f579b-0f13-47dd-9566-dd57100ab22a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409664 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad73212-43f6-49db-a38b-678185cbe9d4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409679 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e802822-9935-46de-947b-c77bf8da4f9e-proxy-tls\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409705 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4kh7\" (UniqueName: \"kubernetes.io/projected/6e802822-9935-46de-947b-c77bf8da4f9e-kube-api-access-z4kh7\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409723 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f5d381d-3a9d-4ba4-85fb-e9008e359729-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409750 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-csi-data-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409774 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-proxy-tls\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409797 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-apiservice-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409838 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a2f579b-0f13-47dd-9566-dd57100ab22a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409853 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-srv-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409869 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-metrics-tls\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409888 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-key\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409937 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409972 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-cabundle\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.409994 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.410030 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-stlnp\" (UniqueName: \"kubernetes.io/projected/aaac7553-88f9-49bd-811f-e993ad0cd40d-kube-api-access-stlnp\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.410089 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-socket-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.410111 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-profile-collector-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.410231 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:41.910213088 +0000 UTC m=+148.905041828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.413448 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-images\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.416160 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-socket-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.416519 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-auth-proxy-config\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.417434 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f5d381d-3a9d-4ba4-85fb-e9008e359729-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: 
\"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.417444 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-plugins-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.417739 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.417764 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-mountpoint-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.418294 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-csi-data-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.418719 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aaac7553-88f9-49bd-811f-e993ad0cd40d-registration-dir\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.419006 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/c7fae259-48f4-4d23-8685-6440a5246423-available-featuregates\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.420570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-service-ca-bundle\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.420850 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.421145 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-default-certificate\") pod 
\"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.421955 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f5d381d-3a9d-4ba4-85fb-e9008e359729-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.422247 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.422328 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-proxy-tls\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.423109 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c7fae259-48f4-4d23-8685-6440a5246423-serving-cert\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.423287 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.428364 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-metrics-certs\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.428660 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.432968 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-stats-auth\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.440485 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/b2e6a5f5-108e-4832-8036-58e1228a7f4f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.440769 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.443411 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bplmh\" (UniqueName: \"kubernetes.io/projected/e1680ee1-e1af-4c87-b9d9-d29e2b0a5043-kube-api-access-bplmh\") pod \"machine-config-operator-74547568cd-prjn9\" (UID: \"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.446957 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.465594 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4j4h\" (UniqueName: \"kubernetes.io/projected/381c20f8-ed2d-4aa8-b99b-5d85a6eb5526-kube-api-access-q4j4h\") pod \"router-default-5444994796-nj2dd\" (UID: \"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526\") " pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.480520 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-577dd"] Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.494750 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stlnp\" (UniqueName: \"kubernetes.io/projected/aaac7553-88f9-49bd-811f-e993ad0cd40d-kube-api-access-stlnp\") pod \"csi-hostpathplugin-c9x8w\" (UID: \"aaac7553-88f9-49bd-811f-e993ad0cd40d\") " pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.505526 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ps7f\" (UniqueName: \"kubernetes.io/projected/b2e6a5f5-108e-4832-8036-58e1228a7f4f-kube-api-access-9ps7f\") pod \"multus-admission-controller-857f4d67dd-2lgz4\" (UID: \"b2e6a5f5-108e-4832-8036-58e1228a7f4f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.511656 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-apiservice-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512407 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/8a2f579b-0f13-47dd-9566-dd57100ab22a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512464 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-srv-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512490 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-metrics-tls\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512509 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-key\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512539 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-cabundle\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512595 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-profile-collector-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512624 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-node-bootstrap-token\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512663 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-webhook-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512693 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26bb6\" (UniqueName: \"kubernetes.io/projected/98e5fa0e-5fb3-4a38-bcdc-328a22d4460f-kube-api-access-26bb6\") pod \"migrator-59844c95c7-br76j\" (UID: \"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512742 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d1f9f7b-5676-4445-b8ec-1288e6beff20-metrics-tls\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512769 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2f579b-0f13-47dd-9566-dd57100ab22a-config\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.512793 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqmss\" (UniqueName: \"kubernetes.io/projected/384fd47a-81d2-4219-8a66-fbeec5bae860-kube-api-access-hqmss\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.513513 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-cabundle\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514524 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a2f579b-0f13-47dd-9566-dd57100ab22a-config\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514590 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jmf4\" (UniqueName: \"kubernetes.io/projected/4304b2e3-9359-4caf-94dd-1e31716fee56-kube-api-access-8jmf4\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514616 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r78j\" (UniqueName: \"kubernetes.io/projected/d7707d7a-bfb7-4600-98f4-be607d9e77f4-kube-api-access-9r78j\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514672 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-config\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514715 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d1f9f7b-5676-4445-b8ec-1288e6beff20-config-volume\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514743 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwmqb\" (UniqueName: \"kubernetes.io/projected/f73c288c-acf3-4ce7-81c7-63953b2fc087-kube-api-access-pwmqb\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514775 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88ecf2d3-bdec-4fe8-a567-44550e85bb19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514795 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm2g2\" (UniqueName: \"kubernetes.io/projected/88ecf2d3-bdec-4fe8-a567-44550e85bb19-kube-api-access-wm2g2\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514857 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ecf2d3-bdec-4fe8-a567-44550e85bb19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514879 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmxm2\" (UniqueName: \"kubernetes.io/projected/6d1f9f7b-5676-4445-b8ec-1288e6beff20-kube-api-access-vmxm2\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514904 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-profile-collector-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514936 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.514956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-serving-cert\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515006 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515029 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wxmb\" (UniqueName: \"kubernetes.io/projected/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-kube-api-access-5wxmb\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515076 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxdqs\" (UniqueName: \"kubernetes.io/projected/924bd720-98da-4f7b-afbc-a7bfa822368f-kube-api-access-kxdqs\") pod \"ingress-canary-m5nll\" (UID: \"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515109 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7707d7a-bfb7-4600-98f4-be607d9e77f4-tmpfs\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515133 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515151 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad73212-43f6-49db-a38b-678185cbe9d4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515194 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv4f7\" (UniqueName: \"kubernetes.io/projected/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-kube-api-access-cv4f7\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515212 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-config\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: 
\"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515231 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7pvl\" (UniqueName: \"kubernetes.io/projected/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-kube-api-access-j7pvl\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515254 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515275 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/924bd720-98da-4f7b-afbc-a7bfa822368f-cert\") pod \"ingress-canary-m5nll\" (UID: \"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515301 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad73212-43f6-49db-a38b-678185cbe9d4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515323 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-certs\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515341 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8w6h\" (UniqueName: \"kubernetes.io/projected/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-kube-api-access-w8w6h\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515362 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-srv-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e802822-9935-46de-947b-c77bf8da4f9e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 
14:06:41.515401 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2f579b-0f13-47dd-9566-dd57100ab22a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515418 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad73212-43f6-49db-a38b-678185cbe9d4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515433 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e802822-9935-46de-947b-c77bf8da4f9e-proxy-tls\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515460 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4kh7\" (UniqueName: \"kubernetes.io/projected/6e802822-9935-46de-947b-c77bf8da4f9e-kube-api-access-z4kh7\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.515540 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d1f9f7b-5676-4445-b8ec-1288e6beff20-config-volume\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.516195 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-metrics-tls\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.516197 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6d1f9f7b-5676-4445-b8ec-1288e6beff20-metrics-tls\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.516549 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-apiservice-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.516554 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 14:06:42.016540193 +0000 UTC m=+149.011368933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.517662 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-profile-collector-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.517910 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e802822-9935-46de-947b-c77bf8da4f9e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.518417 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7707d7a-bfb7-4600-98f4-be607d9e77f4-webhook-cert\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.518427 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4304b2e3-9359-4caf-94dd-1e31716fee56-srv-cert\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.519036 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d7707d7a-bfb7-4600-98f4-be607d9e77f4-tmpfs\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.519081 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88ecf2d3-bdec-4fe8-a567-44550e85bb19-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.519341 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-config\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: 
I0123 14:06:41.521373 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-config\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.521995 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad73212-43f6-49db-a38b-678185cbe9d4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.522280 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-node-bootstrap-token\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.522610 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.522704 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f73c288c-acf3-4ce7-81c7-63953b2fc087-signing-key\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.523463 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/88ecf2d3-bdec-4fe8-a567-44550e85bb19-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.523498 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-serving-cert\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.525275 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8a2f579b-0f13-47dd-9566-dd57100ab22a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.525319 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2ad73212-43f6-49db-a38b-678185cbe9d4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.526311 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-certs\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.529019 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-profile-collector-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.532321 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf9wq\" (UniqueName: \"kubernetes.io/projected/0f5d381d-3a9d-4ba4-85fb-e9008e359729-kube-api-access-mf9wq\") pod \"cluster-image-registry-operator-dc59b4c8b-mm7b2\" (UID: \"0f5d381d-3a9d-4ba4-85fb-e9008e359729\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.533340 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.533441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/924bd720-98da-4f7b-afbc-a7bfa822368f-cert\") pod \"ingress-canary-m5nll\" (UID: \"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.533560 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e802822-9935-46de-947b-c77bf8da4f9e-proxy-tls\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.534388 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/384fd47a-81d2-4219-8a66-fbeec5bae860-srv-cert\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.542512 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" event={"ID":"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22","Type":"ContainerStarted","Data":"bb145b8c8f1d9c65d19d51f3fc510aab7854b83b2164cb5cb8f17aa62cb2de6b"} Jan 23 14:06:41 crc 
kubenswrapper[4775]: I0123 14:06:41.545395 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pmf9\" (UniqueName: \"kubernetes.io/projected/c7fae259-48f4-4d23-8685-6440a5246423-kube-api-access-5pmf9\") pod \"openshift-config-operator-7777fb866f-4dpv6\" (UID: \"c7fae259-48f4-4d23-8685-6440a5246423\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.545533 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0f24ea82f10e41d59727dab54387a8dc961e3bac03585b6673fa010f71e431ce"} Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.545700 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.558667 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bc3404126e9619f10821b8e85b5f5bbeb0506f42b79c1742b2e37d0e6f7014f5"} Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.561462 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgvmt\" (UniqueName: \"kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt\") pod \"console-f9d7485db-fgb82\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.565485 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" event={"ID":"216b36e4-0e40-4073-9432-d1977dc6e03a","Type":"ContainerStarted","Data":"48f09891a71c60da2dd93d7b65738fd065066ef5c721963f0ff962507f68292a"} Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.575880 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"57beb13a8a81752da92a78b6f82384b2c9ba4c377404b875c31ea5a59e72cd20"} Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.587167 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.594596 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.611855 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8a2f579b-0f13-47dd-9566-dd57100ab22a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-6fmtx\" (UID: \"8a2f579b-0f13-47dd-9566-dd57100ab22a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.619200 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.620116 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.120096146 +0000 UTC m=+149.114924886 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.626310 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqmss\" (UniqueName: \"kubernetes.io/projected/384fd47a-81d2-4219-8a66-fbeec5bae860-kube-api-access-hqmss\") pod \"olm-operator-6b444d44fb-65w5f\" (UID: \"384fd47a-81d2-4219-8a66-fbeec5bae860\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.646193 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.657152 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26bb6\" (UniqueName: \"kubernetes.io/projected/98e5fa0e-5fb3-4a38-bcdc-328a22d4460f-kube-api-access-26bb6\") pod \"migrator-59844c95c7-br76j\" (UID: \"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.666971 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jmf4\" (UniqueName: \"kubernetes.io/projected/4304b2e3-9359-4caf-94dd-1e31716fee56-kube-api-access-8jmf4\") pod \"catalog-operator-68c6474976-2vnwm\" (UID: \"4304b2e3-9359-4caf-94dd-1e31716fee56\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.685872 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.692601 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4kh7\" (UniqueName: \"kubernetes.io/projected/6e802822-9935-46de-947b-c77bf8da4f9e-kube-api-access-z4kh7\") pod \"machine-config-controller-84d6567774-rknc7\" (UID: \"6e802822-9935-46de-947b-c77bf8da4f9e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.702952 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r78j\" (UniqueName: \"kubernetes.io/projected/d7707d7a-bfb7-4600-98f4-be607d9e77f4-kube-api-access-9r78j\") pod \"packageserver-d55dfcdfc-rfbk5\" (UID: \"d7707d7a-bfb7-4600-98f4-be607d9e77f4\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.708263 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.717200 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.720408 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.721218 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.221199825 +0000 UTC m=+149.216028555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.721928 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwmqb\" (UniqueName: \"kubernetes.io/projected/f73c288c-acf3-4ce7-81c7-63953b2fc087-kube-api-access-pwmqb\") pod \"service-ca-9c57cc56f-btttg\" (UID: \"f73c288c-acf3-4ce7-81c7-63953b2fc087\") " pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.735594 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.739367 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv4f7\" (UniqueName: \"kubernetes.io/projected/09c7da5e-ce0a-4a3c-9419-420f63f93f0e-kube-api-access-cv4f7\") pod \"machine-config-server-kmqrn\" (UID: \"09c7da5e-ce0a-4a3c-9419-420f63f93f0e\") " pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.763679 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8w6h\" (UniqueName: \"kubernetes.io/projected/a925ae96-5ea9-4dba-9fbf-2ec5f5295026-kube-api-access-w8w6h\") pod \"service-ca-operator-777779d784-fmbdl\" (UID: \"a925ae96-5ea9-4dba-9fbf-2ec5f5295026\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.777344 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.784152 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7pvl\" (UniqueName: \"kubernetes.io/projected/53bbb237-ded5-402c-9bc3-a1cda18e8cfb-kube-api-access-j7pvl\") pod \"package-server-manager-789f6589d5-lssd6\" (UID: \"53bbb237-ded5-402c-9bc3-a1cda18e8cfb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.794722 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.800648 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wxmb\" (UniqueName: \"kubernetes.io/projected/d5dfee7e-59a9-43b1-bd2e-f3200ea5322c-kube-api-access-5wxmb\") pod \"dns-operator-744455d44c-f7z9k\" (UID: \"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c\") " pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.804522 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.822834 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kmqrn" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.823331 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.323300894 +0000 UTC m=+149.318129634 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.823241 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.824738 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.825529 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.32551045 +0000 UTC m=+149.320339190 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.826159 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.835871 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxdqs\" (UniqueName: \"kubernetes.io/projected/924bd720-98da-4f7b-afbc-a7bfa822368f-kube-api-access-kxdqs\") pod \"ingress-canary-m5nll\" (UID: \"924bd720-98da-4f7b-afbc-a7bfa822368f\") " pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.841217 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.848788 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/90f0ee56-8c51-4a42-ae4e-385ff7453aa7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xhzp8\" (UID: \"90f0ee56-8c51-4a42-ae4e-385ff7453aa7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.860837 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.874190 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm2g2\" (UniqueName: \"kubernetes.io/projected/88ecf2d3-bdec-4fe8-a567-44550e85bb19-kube-api-access-wm2g2\") pod \"kube-storage-version-migrator-operator-b67b599dd-5tss4\" (UID: \"88ecf2d3-bdec-4fe8-a567-44550e85bb19\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.878413 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-btttg" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.881497 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmxm2\" (UniqueName: \"kubernetes.io/projected/6d1f9f7b-5676-4445-b8ec-1288e6beff20-kube-api-access-vmxm2\") pod \"dns-default-bvqqf\" (UID: \"6d1f9f7b-5676-4445-b8ec-1288e6beff20\") " pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.892299 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-m5nll" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.903584 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.906759 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2ad73212-43f6-49db-a38b-678185cbe9d4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-d74p6\" (UID: \"2ad73212-43f6-49db-a38b-678185cbe9d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:41 crc kubenswrapper[4775]: I0123 14:06:41.937425 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:41 crc kubenswrapper[4775]: E0123 14:06:41.938182 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.438158254 +0000 UTC m=+149.432986994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.028681 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.038836 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.039254 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.539238703 +0000 UTC m=+149.534067443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.046558 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.056059 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.063170 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.083618 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" Jan 23 14:06:42 crc kubenswrapper[4775]: W0123 14:06:42.085595 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09c7da5e_ce0a_4a3c_9419_420f63f93f0e.slice/crio-d271a3a0cdff5735151ee441e801b7cc05ab47a74e03b6fe6f80fff57ca77bb4 WatchSource:0}: Error finding container d271a3a0cdff5735151ee441e801b7cc05ab47a74e03b6fe6f80fff57ca77bb4: Status 404 returned error can't find the container with id d271a3a0cdff5735151ee441e801b7cc05ab47a74e03b6fe6f80fff57ca77bb4 Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.120555 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx"] Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.132694 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.134811 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-bjb9d"] Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.136526 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-mvqcg"] Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.140730 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.140963 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.6409483 +0000 UTC m=+149.635777030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.141026 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.141250 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.641243429 +0000 UTC m=+149.636072159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: W0123 14:06:42.238080 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc6b05de_2295_4c6a_8f11_367da8bdcf00.slice/crio-7bb3883d76a0a8deb73c6f02e0be245caa774962528a91f583e000e2b4726ad6 WatchSource:0}: Error finding container 7bb3883d76a0a8deb73c6f02e0be245caa774962528a91f583e000e2b4726ad6: Status 404 returned error can't find the container with id 7bb3883d76a0a8deb73c6f02e0be245caa774962528a91f583e000e2b4726ad6 Jan 23 14:06:42 crc kubenswrapper[4775]: W0123 14:06:42.240279 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ba1b8ce_8332_45c9_bfb0_9a1842dea009.slice/crio-18f80c38635d9ae89f00b09f96ca84b976927c6fd597f63188353d171f52b648 WatchSource:0}: Error finding container 18f80c38635d9ae89f00b09f96ca84b976927c6fd597f63188353d171f52b648: Status 404 returned error can't find the container with id 18f80c38635d9ae89f00b09f96ca84b976927c6fd597f63188353d171f52b648 Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.241908 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.242310 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.742288797 +0000 UTC m=+149.737117547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.345455 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.345826 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-23 14:06:42.845796568 +0000 UTC m=+149.840625308 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.453873 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.454078 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:42.954064451 +0000 UTC m=+149.948893191 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.556568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.559498 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.059483599 +0000 UTC m=+150.054312339 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.580199 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" event={"ID":"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22","Type":"ContainerStarted","Data":"f94dbbf973341affd2538b3cbc434d7ad4fd81a5deaa125003de6b72b9911054"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.580247 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" event={"ID":"a5c75370-d1c6-43bd-a8e8-8836ea5bdb22","Type":"ContainerStarted","Data":"5d0a45f59c61e2597b3665b4093c4210d42c5109a22dae499d34d976d5711df6"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.586372 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" event={"ID":"cc6b05de-2295-4c6a-8f11-367da8bdcf00","Type":"ContainerStarted","Data":"7bb3883d76a0a8deb73c6f02e0be245caa774962528a91f583e000e2b4726ad6"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.593366 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" event={"ID":"f38f7554-61cc-493f-8705-8da5f91d3926","Type":"ContainerStarted","Data":"f894849e32473455829c7b40b313941acc7320d102611e6c3ac59b3ced619c0a"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.593411 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" event={"ID":"f38f7554-61cc-493f-8705-8da5f91d3926","Type":"ContainerStarted","Data":"5b0d03f1967a283269994ccf5b1a0dd0a5943d80f6fb4af0aca71021f9919b58"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.596547 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" event={"ID":"13e16abe-9325-4638-8b20-7195b7af8e68","Type":"ContainerStarted","Data":"33584a8eb34271542645243cdb07bed0b6f45ebf9838dc8cddc231e46573e88b"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.605814 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kmqrn" event={"ID":"09c7da5e-ce0a-4a3c-9419-420f63f93f0e","Type":"ContainerStarted","Data":"fea98ee07609f5547e776eb1de5619607f5763636cff987d5948eb9e56e2b31b"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.605869 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kmqrn" event={"ID":"09c7da5e-ce0a-4a3c-9419-420f63f93f0e","Type":"ContainerStarted","Data":"d271a3a0cdff5735151ee441e801b7cc05ab47a74e03b6fe6f80fff57ca77bb4"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.627915 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" event={"ID":"3066d31d-92a4-45a7-b368-ba66d5689456","Type":"ContainerStarted","Data":"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f"} 
Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.628988 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.648617 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" event={"ID":"216b36e4-0e40-4073-9432-d1977dc6e03a","Type":"ContainerStarted","Data":"8f41307c72e8249a81cfaf681b76ff654ea059bcc3caa121afee462f92bd4f8f"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.648703 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" event={"ID":"216b36e4-0e40-4073-9432-d1977dc6e03a","Type":"ContainerStarted","Data":"c6ba06eea1caea63b9c1618708d0058fd8c2a9908da60ea31992e6ace6fc83d0"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.659879 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mvqcg" event={"ID":"8ba1b8ce-8332-45c9-bfb0-9a1842dea009","Type":"ContainerStarted","Data":"5750ec86d3204a228dcf0783fbf4c9551f8adee39c18a43ac4f08c4129127cdd"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.659928 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-mvqcg" event={"ID":"8ba1b8ce-8332-45c9-bfb0-9a1842dea009","Type":"ContainerStarted","Data":"18f80c38635d9ae89f00b09f96ca84b976927c6fd597f63188353d171f52b648"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.660923 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.661458 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.663352 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.16331911 +0000 UTC m=+150.158147850 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.695576 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvqcg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.695632 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvqcg" podUID="8ba1b8ce-8332-45c9-bfb0-9a1842dea009" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.697930 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nj2dd" event={"ID":"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526","Type":"ContainerStarted","Data":"d03c22578342edac43ef977c92563c16d4c79bf02047a1d0aeeeb99dc0d0b938"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.697985 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nj2dd" event={"ID":"381c20f8-ed2d-4aa8-b99b-5d85a6eb5526","Type":"ContainerStarted","Data":"253ebed9b658348ffac0701fabd0fcd44d419d9faec3b842712d6efafb3de24f"} Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.764262 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.767136 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.26711923 +0000 UTC m=+150.261947980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.865041 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.865169 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.365142157 +0000 UTC m=+150.359970897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.865282 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.865576 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.36556517 +0000 UTC m=+150.360394020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.968330 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:42 crc kubenswrapper[4775]: E0123 14:06:42.968795 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.468775111 +0000 UTC m=+150.463603861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.980536 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"] Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.980696 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-svb79"] Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.985399 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh"] Jan 23 14:06:42 crc kubenswrapper[4775]: W0123 14:06:42.990893 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6995952d_6d8a_494d_842c_1d5cf9ee1207.slice/crio-7c10d0aa8c8e64d5633a1133583a17d88ad82ec55e1dccd0e66c17926ecbf30f WatchSource:0}: Error finding container 7c10d0aa8c8e64d5633a1133583a17d88ad82ec55e1dccd0e66c17926ecbf30f: Status 404 returned error can't find the container with id 7c10d0aa8c8e64d5633a1133583a17d88ad82ec55e1dccd0e66c17926ecbf30f Jan 23 14:06:42 crc kubenswrapper[4775]: I0123 14:06:42.993695 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.011316 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.038653 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gqzl"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.052990 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver/apiserver-76f77b778f-mc4h4"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.071600 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.072002 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.571986683 +0000 UTC m=+150.566815423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.072833 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.087492 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.110332 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.160891 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.173269 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.173582 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.673566397 +0000 UTC m=+150.668395137 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.173716 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.173741 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.174889 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-2lgz4"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.177054 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.186703 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf"] Jan 23 14:06:43 crc kubenswrapper[4775]: W0123 14:06:43.193221 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1680ee1_e1af_4c87_b9d9_d29e2b0a5043.slice/crio-c48a040b1f1c8981b8aafbaab5be57849f54cf1f36d3b282689fc0eb2ddc7c94 WatchSource:0}: Error finding container c48a040b1f1c8981b8aafbaab5be57849f54cf1f36d3b282689fc0eb2ddc7c94: Status 404 returned error can't find the container with id c48a040b1f1c8981b8aafbaab5be57849f54cf1f36d3b282689fc0eb2ddc7c94 Jan 23 14:06:43 crc kubenswrapper[4775]: W0123 14:06:43.223785 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d6b6f17_bb56_49ba_8487_6e07346780a1.slice/crio-87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737 WatchSource:0}: Error finding container 87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737: Status 404 returned error can't find the container with id 87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737 Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.272260 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.276189 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.276464 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.77645303 +0000 UTC m=+150.771281770 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.278453 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.320127 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-c9x8w"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.336561 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-f7z9k"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.356381 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-m5nll"] Jan 23 14:06:43 crc kubenswrapper[4775]: W0123 14:06:43.359661 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaac7553_88f9_49bd_811f_e993ad0cd40d.slice/crio-5376a6ff674796a38546c913c7f44559d2a122040294b5bbb2a622468116412e WatchSource:0}: Error finding container 5376a6ff674796a38546c913c7f44559d2a122040294b5bbb2a622468116412e: Status 404 returned error can't find the container with id 5376a6ff674796a38546c913c7f44559d2a122040294b5bbb2a622468116412e Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.361454 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7"] Jan 23 14:06:43 crc kubenswrapper[4775]: W0123 14:06:43.372102 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5dfee7e_59a9_43b1_bd2e_f3200ea5322c.slice/crio-321c081398890ce739ccec973358fba4a5d53864473eb1c957cd54300bb1cbcc WatchSource:0}: Error finding container 321c081398890ce739ccec973358fba4a5d53864473eb1c957cd54300bb1cbcc: Status 404 returned error can't find the container with id 321c081398890ce739ccec973358fba4a5d53864473eb1c957cd54300bb1cbcc Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.373210 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.377040 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.377275 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.87725942 +0000 UTC m=+150.872088160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.382095 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kmqrn" podStartSLOduration=5.382076894 podStartE2EDuration="5.382076894s" podCreationTimestamp="2026-01-23 14:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.369436956 +0000 UTC m=+150.364265696" watchObservedRunningTime="2026-01-23 14:06:43.382076894 +0000 UTC m=+150.376905634" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.382584 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.411883 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" podStartSLOduration=127.411864933 podStartE2EDuration="2m7.411864933s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.410264546 +0000 UTC m=+150.405093286" watchObservedRunningTime="2026-01-23 14:06:43.411864933 +0000 UTC m=+150.406693673" Jan 23 14:06:43 crc kubenswrapper[4775]: W0123 14:06:43.436488 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f5d381d_3a9d_4ba4_85fb_e9008e359729.slice/crio-e595c090e2bb8998e697905db90181d506f0b2b2dad59890adc39b1bd0afb2b9 WatchSource:0}: Error finding container e595c090e2bb8998e697905db90181d506f0b2b2dad59890adc39b1bd0afb2b9: Status 404 returned error can't find the container with id e595c090e2bb8998e697905db90181d506f0b2b2dad59890adc39b1bd0afb2b9 Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.455971 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.478359 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.478648 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:43.978635537 +0000 UTC m=+150.973464277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.554139 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nj2dd" podStartSLOduration=127.554119942 podStartE2EDuration="2m7.554119942s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.550781002 +0000 UTC m=+150.545609742" watchObservedRunningTime="2026-01-23 14:06:43.554119942 +0000 UTC m=+150.548948682" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.581778 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.582191 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.08217604 +0000 UTC m=+151.077004780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.587708 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-ddqcf" podStartSLOduration=127.587690564 podStartE2EDuration="2m7.587690564s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.582466028 +0000 UTC m=+150.577294768" watchObservedRunningTime="2026-01-23 14:06:43.587690564 +0000 UTC m=+150.582519294" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.587930 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.597223 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-btttg"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.619356 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.623552 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-mvqcg" podStartSLOduration=127.623511244 podStartE2EDuration="2m7.623511244s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.609152665 +0000 UTC m=+150.603981405" watchObservedRunningTime="2026-01-23 14:06:43.623511244 +0000 UTC m=+150.618339994" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.639488 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bvqqf"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.646792 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.656051 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-577dd" podStartSLOduration=127.656009374 podStartE2EDuration="2m7.656009374s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.631817132 +0000 UTC m=+150.626645882" watchObservedRunningTime="2026-01-23 14:06:43.656009374 +0000 UTC m=+150.650838114" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.671959 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:43 crc kubenswrapper[4775]: [-]has-synced failed: reason 
withheld Jan 23 14:06:43 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:43 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.672023 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.674495 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-zbzw5" podStartSLOduration=127.674477276 podStartE2EDuration="2m7.674477276s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:43.673145896 +0000 UTC m=+150.667974636" watchObservedRunningTime="2026-01-23 14:06:43.674477276 +0000 UTC m=+150.669306016" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.682879 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.687928 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.187910067 +0000 UTC m=+151.182738807 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.691194 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.719931 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.781479 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.784135 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j"] Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.784476 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.784758 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.284737269 +0000 UTC m=+151.279566009 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.790452 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" event={"ID":"4304b2e3-9359-4caf-94dd-1e31716fee56","Type":"ContainerStarted","Data":"7ace852458ef3812ba419dd36db84c561e9ef2c87135369e7ac066c596e8234d"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.792878 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" event={"ID":"384fd47a-81d2-4219-8a66-fbeec5bae860","Type":"ContainerStarted","Data":"639ace8e745ba46a24dfc49b1c972ca70c3d991b11b131d7fe7adac9a53d8663"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.808690 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" event={"ID":"c575b767-e334-406f-849d-e562d70985fd","Type":"ContainerStarted","Data":"783601f2459ac0ca5a884fc5bd0420d2b8d2891d3547354b43ae23ef08178d0c"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.817384 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" event={"ID":"85a9044b-9089-4a6a-87e6-06372c531aa9","Type":"ContainerStarted","Data":"f86a0f102d54f6d33929d8d65d55921f6232664bd9afa670a151241e47d9a59e"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.817438 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" event={"ID":"85a9044b-9089-4a6a-87e6-06372c531aa9","Type":"ContainerStarted","Data":"5b9f2f71532f723f9a1eaffdd6ca3478934eb1a9d3de769264a6371bf8165faa"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.824314 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" event={"ID":"2d6b6f17-bb56-49ba-8487-6e07346780a1","Type":"ContainerStarted","Data":"87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.833126 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" event={"ID":"53bbb237-ded5-402c-9bc3-a1cda18e8cfb","Type":"ContainerStarted","Data":"3b278d30f0743f4dd565a6dbd3d122f825348b8dc8e4879597d6448c00e8fa52"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.834290 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" event={"ID":"549e54fa-53eb-4a9d-9578-5cfbd02bb28d","Type":"ContainerStarted","Data":"6530997e5286a83d4549d5bf7514360e30ecb9d8d39bfd63a0f6c277f500f34c"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.834308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" event={"ID":"549e54fa-53eb-4a9d-9578-5cfbd02bb28d","Type":"ContainerStarted","Data":"886b09d921c71643ce311527caaa30a8c2770755e19f3690687807ae42b0c192"} Jan 23 
14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.835614 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" event={"ID":"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c","Type":"ContainerStarted","Data":"321c081398890ce739ccec973358fba4a5d53864473eb1c957cd54300bb1cbcc"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.837780 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" event={"ID":"c7fae259-48f4-4d23-8685-6440a5246423","Type":"ContainerStarted","Data":"004f91f5be711dcadb27a7ce5a8e5de59588c53fc1f21044e803235c16b3edf4"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.837826 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" event={"ID":"c7fae259-48f4-4d23-8685-6440a5246423","Type":"ContainerStarted","Data":"b9c62366623dfd8a2b8c616f1243b39a22f93f3e949fbe7e19d4fed3faa7b230"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.849495 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" event={"ID":"f9750de6-fc79-440e-8ad4-07acbe4edb49","Type":"ContainerStarted","Data":"a25b9ed2fe7fe9b82a067f014d9b57d4d05a554e8f8f383aecd06916c3d9fbc7"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.853441 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" event={"ID":"dbaf4876-b99e-4096-9f36-5c888312ddab","Type":"ContainerStarted","Data":"62636a249346418acceb0e0644d57d1425845da4d462082b136a17efa80927bf"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.860115 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" event={"ID":"0f5d381d-3a9d-4ba4-85fb-e9008e359729","Type":"ContainerStarted","Data":"e595c090e2bb8998e697905db90181d506f0b2b2dad59890adc39b1bd0afb2b9"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.868564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-btttg" event={"ID":"f73c288c-acf3-4ce7-81c7-63953b2fc087","Type":"ContainerStarted","Data":"477ef4f6069365949577cf970eccbfb2d9b7d3ef16917b0e154e5b65def390c9"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.872778 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" event={"ID":"8a2f579b-0f13-47dd-9566-dd57100ab22a","Type":"ContainerStarted","Data":"ebabb4649f38ec987dc44cce66ab98605c75a09267b77e56bccdac83686fdf45"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.880813 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" event={"ID":"aaac7553-88f9-49bd-811f-e993ad0cd40d","Type":"ContainerStarted","Data":"5376a6ff674796a38546c913c7f44559d2a122040294b5bbb2a622468116412e"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.886252 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.886579 4775 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.3865664 +0000 UTC m=+151.381395140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.887284 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m5nll" event={"ID":"924bd720-98da-4f7b-afbc-a7bfa822368f","Type":"ContainerStarted","Data":"162fa534077513c17bd67067ada7c374dd8a402911c1d8ec09f73ab9b9ad96ab"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.894651 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" event={"ID":"2ad73212-43f6-49db-a38b-678185cbe9d4","Type":"ContainerStarted","Data":"27385bef26a71fabd2cee54731985e6e9f6add72ddba47686bab731ae015c209"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.910054 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" event={"ID":"cc6b05de-2295-4c6a-8f11-367da8bdcf00","Type":"ContainerStarted","Data":"554f386f9ea3a922d1075f75cb987f87980f014544d89c8d19624362f1eb02ce"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.917073 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" event={"ID":"6e802822-9935-46de-947b-c77bf8da4f9e","Type":"ContainerStarted","Data":"0d80e6b67e9f1268fd378b0d81dd590d53cf6773ef5207730afe777426e70b8d"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.932161 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" event={"ID":"a925ae96-5ea9-4dba-9fbf-2ec5f5295026","Type":"ContainerStarted","Data":"f40e82999acc6a1c646bbd5afcf2ea2228d735f9af02314ff6b7711368b69212"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.938685 4775 csr.go:261] certificate signing request csr-zrpmh is approved, waiting to be issued Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.945392 4775 csr.go:257] certificate signing request csr-zrpmh is issued Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.953911 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" event={"ID":"ba896a24-e6f2-4480-807b-b3c5b6232cea","Type":"ContainerStarted","Data":"f771e3a0545124f9aff5df19948a12cf28d603c121f483e5eeca5e318e63c454"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.954265 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.957957 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" 
event={"ID":"1f3aab1c-726d-4027-b629-e04916bc4f8b","Type":"ContainerStarted","Data":"1976824d0d7581f25778cade1ceabbaefa46516e629ce58d32cb2d84aec22a6a"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.958046 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" event={"ID":"1f3aab1c-726d-4027-b629-e04916bc4f8b","Type":"ContainerStarted","Data":"c804f2807463870f94ca39d16cb9e5b2566a2fdc9148b1292a1636387b79edff"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.958837 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.960071 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" event={"ID":"b2e6a5f5-108e-4832-8036-58e1228a7f4f","Type":"ContainerStarted","Data":"eb3f4de1b3b2b446a6c1c5c2b453d25bb2f07abbbaa38dc068fe9c80edd97628"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.961753 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" event={"ID":"8ac48e42-bde7-4701-b994-825906603b06","Type":"ContainerStarted","Data":"f51d1a8b2d530002962d11af10b4a9dc9403d48b6849c26ac64175b119f21f51"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.961782 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" event={"ID":"8ac48e42-bde7-4701-b994-825906603b06","Type":"ContainerStarted","Data":"14f4d6283aff6de605f724a865763d27a0a448211bbacd5d102fb5562e6f44ef"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.962492 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.963741 4775 patch_prober.go:28] interesting pod/console-operator-58897d9998-7gqzl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.963825 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" podUID="ba896a24-e6f2-4480-807b-b3c5b6232cea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.963894 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-v2bx4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.963925 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.964523 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-fgb82" event={"ID":"a6821f92-2d15-4dc0-92ed-7a30cef98db9","Type":"ContainerStarted","Data":"ef54fd5e26cacb272f1e1be9cfe28c0c931df15d597bb7da81a47734c646362b"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.966172 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pmcq8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.966226 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.969641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" event={"ID":"13e16abe-9325-4638-8b20-7195b7af8e68","Type":"ContainerStarted","Data":"cfdaa792dd53dd9618e6fd6cc1a7572ca8ac417bc5bfaedaf956ce88910394a2"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.973320 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" event={"ID":"6995952d-6d8a-494d-842c-1d5cf9ee1207","Type":"ContainerStarted","Data":"78954533eb3567fbf192793631faefaf0c03ec3d3d29c16c6d6c22000f8c91f2"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.973353 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" event={"ID":"6995952d-6d8a-494d-842c-1d5cf9ee1207","Type":"ContainerStarted","Data":"7c10d0aa8c8e64d5633a1133583a17d88ad82ec55e1dccd0e66c17926ecbf30f"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.979149 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" event={"ID":"a9a77e3c-0e93-45f9-ab81-7dfbd2916588","Type":"ContainerStarted","Data":"126d7f9344248499833b2fa9bffa79374396f9b7ca1fc1c07f0f0a3674655194"} Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.987138 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.987517 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.487493544 +0000 UTC m=+151.482322284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:43 crc kubenswrapper[4775]: I0123 14:06:43.987857 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:43 crc kubenswrapper[4775]: E0123 14:06:43.992377 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.492356499 +0000 UTC m=+151.487185239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.001243 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" event={"ID":"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043","Type":"ContainerStarted","Data":"c48a040b1f1c8981b8aafbaab5be57849f54cf1f36d3b282689fc0eb2ddc7c94"} Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.001651 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvqcg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.001692 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvqcg" podUID="8ba1b8ce-8332-45c9-bfb0-9a1842dea009" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.093617 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.096445 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 14:06:44.596425347 +0000 UTC m=+151.591254247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.198156 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.198500 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.698487415 +0000 UTC m=+151.693316155 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.300134 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.300841 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.800823721 +0000 UTC m=+151.795652461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.401967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.406150 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:44.906130056 +0000 UTC m=+151.900958796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.503418 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.503767 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.003746201 +0000 UTC m=+151.998574941 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.605353 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.605977 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.105960994 +0000 UTC m=+152.100789734 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.659007 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:44 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:44 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:44 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.659049 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.710634 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.710955 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.210939119 +0000 UTC m=+152.205767859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.711116 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.711561 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.211553147 +0000 UTC m=+152.206381887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.811736 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.812052 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.312033158 +0000 UTC m=+152.306861898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.907643 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-qnhrq" podStartSLOduration=128.907627652 podStartE2EDuration="2m8.907627652s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:44.904730336 +0000 UTC m=+151.899559076" watchObservedRunningTime="2026-01-23 14:06:44.907627652 +0000 UTC m=+151.902456392" Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.912653 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:44 crc kubenswrapper[4775]: E0123 14:06:44.913007 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.412993193 +0000 UTC m=+152.407821933 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.948352 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-23 14:01:43 +0000 UTC, rotation deadline is 2026-11-22 05:07:29.151809769 +0000 UTC Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.948432 4775 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7263h0m44.203380448s for next certificate rotation Jan 23 14:06:44 crc kubenswrapper[4775]: I0123 14:06:44.962982 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" podStartSLOduration=128.962965575 podStartE2EDuration="2m8.962965575s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:44.960628075 +0000 UTC m=+151.955456825" watchObservedRunningTime="2026-01-23 14:06:44.962965575 +0000 UTC m=+151.957794315" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.021298 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.021634 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.521619007 +0000 UTC m=+152.516447747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.032951 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-psxgx" podStartSLOduration=129.032936805 podStartE2EDuration="2m9.032936805s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:44.985194899 +0000 UTC m=+151.980023639" watchObservedRunningTime="2026-01-23 14:06:45.032936805 +0000 UTC m=+152.027765545" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.033195 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" podStartSLOduration=129.033191022 podStartE2EDuration="2m9.033191022s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.030253944 +0000 UTC m=+152.025082684" watchObservedRunningTime="2026-01-23 14:06:45.033191022 +0000 UTC m=+152.028019762" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.040829 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" event={"ID":"88ecf2d3-bdec-4fe8-a567-44550e85bb19","Type":"ContainerStarted","Data":"be68934ceedd350fb3fd62ed0d8974ea06e81205df9940c9174d70e523757526"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.040867 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" event={"ID":"88ecf2d3-bdec-4fe8-a567-44550e85bb19","Type":"ContainerStarted","Data":"d22527b7f6abd399b4e5da0e46745fb66ca5922f663f2fb29fa0c5b9c706c45a"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.049268 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" event={"ID":"d7707d7a-bfb7-4600-98f4-be607d9e77f4","Type":"ContainerStarted","Data":"573d26feaa63963c83105249d1b5ec9688369844c41c6889cdddd83542400533"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.049308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" event={"ID":"d7707d7a-bfb7-4600-98f4-be607d9e77f4","Type":"ContainerStarted","Data":"cbf05869c4786c2bfe2dac571e3097dd06a662629a0eb18f501f6aefb8de7f66"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.050244 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.061644 4775 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rfbk5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.061706 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" podUID="d7707d7a-bfb7-4600-98f4-be607d9e77f4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.062178 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" event={"ID":"85a9044b-9089-4a6a-87e6-06372c531aa9","Type":"ContainerStarted","Data":"eab78b0bfbb1e2cebe092155c9772fd6a290b1c5a307cf780d4c398326697ba1"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.079017 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" event={"ID":"ba896a24-e6f2-4480-807b-b3c5b6232cea","Type":"ContainerStarted","Data":"0357894b6772ffb91ddd27f777486f5eb9f0d86be383f7b3a92aa98b9889165c"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.079723 4775 patch_prober.go:28] interesting pod/console-operator-58897d9998-7gqzl container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.079749 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7gqzl" podUID="ba896a24-e6f2-4480-807b-b3c5b6232cea" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.104468 4775 generic.go:334] "Generic (PLEG): container finished" podID="f9750de6-fc79-440e-8ad4-07acbe4edb49" containerID="200b0617b8f8a4369cb8cc24a748ac72dde52270d577d79cddfd4b0d1ba88c77" exitCode=0 Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.104535 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" event={"ID":"f9750de6-fc79-440e-8ad4-07acbe4edb49","Type":"ContainerDied","Data":"200b0617b8f8a4369cb8cc24a748ac72dde52270d577d79cddfd4b0d1ba88c77"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.111080 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-bjb9d" podStartSLOduration=129.111065918 podStartE2EDuration="2m9.111065918s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.067832117 +0000 UTC m=+152.062660857" watchObservedRunningTime="2026-01-23 14:06:45.111065918 +0000 UTC m=+152.105894658" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.112070 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" podStartSLOduration=129.112064828 podStartE2EDuration="2m9.112064828s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 
14:06:45.111312345 +0000 UTC m=+152.106141085" watchObservedRunningTime="2026-01-23 14:06:45.112064828 +0000 UTC m=+152.106893568" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.113937 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bvqqf" event={"ID":"6d1f9f7b-5676-4445-b8ec-1288e6beff20","Type":"ContainerStarted","Data":"ac7d274e3b76addeb461e88a6e38422c130dcf74dc86a462564f79ccc6d24226"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.122450 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.122798 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.622783628 +0000 UTC m=+152.617612368 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.127828 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" event={"ID":"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043","Type":"ContainerStarted","Data":"0a16f0a2d7252e8f056fd9b2124ef0f9eb64800c7655173b0553fc20d85d036e"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.128097 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" event={"ID":"e1680ee1-e1af-4c87-b9d9-d29e2b0a5043","Type":"ContainerStarted","Data":"e87a3cf924507e263fdf6fc6380f43067923478788b82dfed3f254fe4119ec0f"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.137720 4775 generic.go:334] "Generic (PLEG): container finished" podID="c7fae259-48f4-4d23-8685-6440a5246423" containerID="004f91f5be711dcadb27a7ce5a8e5de59588c53fc1f21044e803235c16b3edf4" exitCode=0 Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.137852 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" event={"ID":"c7fae259-48f4-4d23-8685-6440a5246423","Type":"ContainerDied","Data":"004f91f5be711dcadb27a7ce5a8e5de59588c53fc1f21044e803235c16b3edf4"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.142019 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-gc9bh" podStartSLOduration=129.141998302 podStartE2EDuration="2m9.141998302s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.140520287 +0000 UTC m=+152.135349027" 
watchObservedRunningTime="2026-01-23 14:06:45.141998302 +0000 UTC m=+152.136827042" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.178616 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" event={"ID":"384fd47a-81d2-4219-8a66-fbeec5bae860","Type":"ContainerStarted","Data":"5a4ee699255124b6c9de62ae3f6c63385a8d2c91c5782187dbc607d0534d6ca1"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.179385 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.184775 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" event={"ID":"dbaf4876-b99e-4096-9f36-5c888312ddab","Type":"ContainerStarted","Data":"e7bd6df12f07013d959edd287e01e0630e48c8a8430cefe74f71eac69a37ec9e"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.184844 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" event={"ID":"dbaf4876-b99e-4096-9f36-5c888312ddab","Type":"ContainerStarted","Data":"79f891f2deda6d5c49955de71cdbf747e5162a9a62e11d7fb5d0d1490dd98771"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.184982 4775 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-65w5f container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.185027 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" podUID="384fd47a-81d2-4219-8a66-fbeec5bae860" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.189864 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" podStartSLOduration=129.189794779 podStartE2EDuration="2m9.189794779s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.183441869 +0000 UTC m=+152.178270609" watchObservedRunningTime="2026-01-23 14:06:45.189794779 +0000 UTC m=+152.184623519" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.219332 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-m5nll" event={"ID":"924bd720-98da-4f7b-afbc-a7bfa822368f","Type":"ContainerStarted","Data":"212dd351f81a598a8bd5dbf1dbca4465a9c1ebafd52cd7baffb3ad9b770b3a5a"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.223499 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.224613 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.724597738 +0000 UTC m=+152.719426478 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.225416 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" event={"ID":"2d6b6f17-bb56-49ba-8487-6e07346780a1","Type":"ContainerStarted","Data":"bd180f88acb55bc6174b54cab0740792964b942d82c9bf0cffd2ac1751bececd"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.242303 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" event={"ID":"b2e6a5f5-108e-4832-8036-58e1228a7f4f","Type":"ContainerStarted","Data":"b3efc1f7717b5f270092d41d50747303298de4a30f116a9a55470729a7b0e1e9"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.259301 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" event={"ID":"a9a77e3c-0e93-45f9-ab81-7dfbd2916588","Type":"ContainerStarted","Data":"0180d579f234a3f26f7595abf341e660581404c07fa388dc580f716a183ffec5"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.260190 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.262642 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-prjn9" podStartSLOduration=129.262622994 podStartE2EDuration="2m9.262622994s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.259849971 +0000 UTC m=+152.254678711" watchObservedRunningTime="2026-01-23 14:06:45.262622994 +0000 UTC m=+152.257451734" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.265110 4775 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lqcpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.265177 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.280843 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgb82" 
event={"ID":"a6821f92-2d15-4dc0-92ed-7a30cef98db9","Type":"ContainerStarted","Data":"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.283348 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" event={"ID":"8a2f579b-0f13-47dd-9566-dd57100ab22a","Type":"ContainerStarted","Data":"aca20b59d669fc8018e913059e0ac8734aa8244d21a04ce0eb4a91fadfbe1e4b"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.310404 4775 generic.go:334] "Generic (PLEG): container finished" podID="c575b767-e334-406f-849d-e562d70985fd" containerID="4ad8d1efee4bf79acffb5d566a3c125a4291d13446cc6a0749f1d14599861de0" exitCode=0 Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.310487 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" event={"ID":"c575b767-e334-406f-849d-e562d70985fd","Type":"ContainerDied","Data":"4ad8d1efee4bf79acffb5d566a3c125a4291d13446cc6a0749f1d14599861de0"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.316243 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-svb79" podStartSLOduration=129.316224105 podStartE2EDuration="2m9.316224105s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.310268607 +0000 UTC m=+152.305097347" watchObservedRunningTime="2026-01-23 14:06:45.316224105 +0000 UTC m=+152.311052845" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.337820 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.345758 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.845743206 +0000 UTC m=+152.840571946 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.353280 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-5tss4" podStartSLOduration=129.353266251 podStartE2EDuration="2m9.353266251s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.351703774 +0000 UTC m=+152.346532584" watchObservedRunningTime="2026-01-23 14:06:45.353266251 +0000 UTC m=+152.348094991" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.381448 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" event={"ID":"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f","Type":"ContainerStarted","Data":"4163a4edaf0e033d5e364222a1a64ac6ccb4ad9e12f8f91e0be95e4a44ff6a02"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.381779 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" event={"ID":"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f","Type":"ContainerStarted","Data":"483a051e7d1d0603d8c840c5df82a09845654c9edf0f450f8cfd1a455e73da79"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.386616 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podStartSLOduration=129.386596546 podStartE2EDuration="2m9.386596546s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.38203399 +0000 UTC m=+152.376862730" watchObservedRunningTime="2026-01-23 14:06:45.386596546 +0000 UTC m=+152.381425286" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.387184 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" event={"ID":"0f5d381d-3a9d-4ba4-85fb-e9008e359729","Type":"ContainerStarted","Data":"ef2a13aaab2e42e13f0a0b5435cc4064458a0380affaecdebb1304d206440206"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.395926 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" event={"ID":"90f0ee56-8c51-4a42-ae4e-385ff7453aa7","Type":"ContainerStarted","Data":"5755f1499d835cc80a9aa7263bf7fd543392d67a56715ef8a8aab24dfb9da1b1"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.404255 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" event={"ID":"53bbb237-ded5-402c-9bc3-a1cda18e8cfb","Type":"ContainerStarted","Data":"f274109f57aead4df1f63e3ed82694af2d2f87335f1d763e0cf3d8d62b18a663"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.404867 4775 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.409552 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-btttg" event={"ID":"f73c288c-acf3-4ce7-81c7-63953b2fc087","Type":"ContainerStarted","Data":"fb6dfd5b49967c52406c64c6cd631e5d024adc2620f6107d8c183146c200f957"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.412578 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" event={"ID":"4304b2e3-9359-4caf-94dd-1e31716fee56","Type":"ContainerStarted","Data":"571218eaeef0e770b801d198a59edd6b660899d8cc32b4fdd85615dd021aa7c6"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.412991 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.414917 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" event={"ID":"a925ae96-5ea9-4dba-9fbf-2ec5f5295026","Type":"ContainerStarted","Data":"fb6178069a1a5eb139f802b984fc975874815dfe021944ef084a0e32cd5f3079"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.416048 4775 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-2vnwm container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.416088 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" podUID="4304b2e3-9359-4caf-94dd-1e31716fee56" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.422015 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" podStartSLOduration=129.422003424 podStartE2EDuration="2m9.422003424s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.420380115 +0000 UTC m=+152.415208855" watchObservedRunningTime="2026-01-23 14:06:45.422003424 +0000 UTC m=+152.416832164" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.428060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" event={"ID":"2ad73212-43f6-49db-a38b-678185cbe9d4","Type":"ContainerStarted","Data":"f69e0599ca69e4d11df671bc24e8330f92e97bbe5a740bbbde353b49ed3b0a4a"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.447168 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.449102 4775 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:45.949066502 +0000 UTC m=+152.943895242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.465010 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" event={"ID":"6e802822-9935-46de-947b-c77bf8da4f9e","Type":"ContainerStarted","Data":"8c3c2436f2346ccfaa7c4d3c6c3378201bba5c8cabf7e5ee34c3cb1dd7676b40"} Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469454 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvqcg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469496 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-v2bx4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469454 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pmcq8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469519 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469494 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvqcg" podUID="8ba1b8ce-8332-45c9-bfb0-9a1842dea009" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.469518 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.470088 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" 
podStartSLOduration=129.470067649 podStartE2EDuration="2m9.470067649s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.463121001 +0000 UTC m=+152.457949741" watchObservedRunningTime="2026-01-23 14:06:45.470067649 +0000 UTC m=+152.464896389" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.544134 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" podStartSLOduration=129.54411679 podStartE2EDuration="2m9.54411679s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.511106584 +0000 UTC m=+152.505935324" watchObservedRunningTime="2026-01-23 14:06:45.54411679 +0000 UTC m=+152.538945530" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.549873 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.554607 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.054595733 +0000 UTC m=+153.049424463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.596518 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-6fmtx" podStartSLOduration=129.596489224 podStartE2EDuration="2m9.596489224s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.595840515 +0000 UTC m=+152.590669255" watchObservedRunningTime="2026-01-23 14:06:45.596489224 +0000 UTC m=+152.591317964" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.629438 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-m5nll" podStartSLOduration=7.629392447 podStartE2EDuration="7.629392447s" podCreationTimestamp="2026-01-23 14:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.628724927 +0000 UTC m=+152.623553667" watchObservedRunningTime="2026-01-23 14:06:45.629392447 +0000 UTC m=+152.624221187" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.652573 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.652970 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.152955741 +0000 UTC m=+153.147784471 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.661168 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:45 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:45 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:45 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.661215 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.676620 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-fgb82" podStartSLOduration=129.676603027 podStartE2EDuration="2m9.676603027s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.6740205 +0000 UTC m=+152.668849240" watchObservedRunningTime="2026-01-23 14:06:45.676603027 +0000 UTC m=+152.671431767" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.709490 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-xpzqz" podStartSLOduration=129.709476519 podStartE2EDuration="2m9.709476519s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.707954123 +0000 UTC m=+152.702782873" watchObservedRunningTime="2026-01-23 14:06:45.709476519 +0000 UTC m=+152.704305259" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.741207 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mm7b2" podStartSLOduration=129.741188916 podStartE2EDuration="2m9.741188916s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.741152654 +0000 UTC m=+152.735981394" watchObservedRunningTime="2026-01-23 14:06:45.741188916 +0000 UTC m=+152.736017656" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.756079 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 
14:06:45.756562 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.256540364 +0000 UTC m=+153.251369104 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.779004 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-fmbdl" podStartSLOduration=129.778980274 podStartE2EDuration="2m9.778980274s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.777124479 +0000 UTC m=+152.771953239" watchObservedRunningTime="2026-01-23 14:06:45.778980274 +0000 UTC m=+152.773809014" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.825104 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" podStartSLOduration=129.825087591 podStartE2EDuration="2m9.825087591s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.822770322 +0000 UTC m=+152.817599062" watchObservedRunningTime="2026-01-23 14:06:45.825087591 +0000 UTC m=+152.819916331" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.857493 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.858087 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.358070956 +0000 UTC m=+153.352899686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.868146 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" podStartSLOduration=129.868108366 podStartE2EDuration="2m9.868108366s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.866134187 +0000 UTC m=+152.860962927" watchObservedRunningTime="2026-01-23 14:06:45.868108366 +0000 UTC m=+152.862937096" Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.959568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:45 crc kubenswrapper[4775]: E0123 14:06:45.959899 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.459887597 +0000 UTC m=+153.454716337 (durationBeforeRetry 500ms). 
Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.977344 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-btttg" podStartSLOduration=129.977323037 podStartE2EDuration="2m9.977323037s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.974232975 +0000 UTC m=+152.969061725" watchObservedRunningTime="2026-01-23 14:06:45.977323037 +0000 UTC m=+152.972151777"
Jan 23 14:06:45 crc kubenswrapper[4775]: I0123 14:06:45.977865 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm" podStartSLOduration=129.977857193 podStartE2EDuration="2m9.977857193s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:45.920615564 +0000 UTC m=+152.915444294" watchObservedRunningTime="2026-01-23 14:06:45.977857193 +0000 UTC m=+152.972685933"
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.050376 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-d74p6" podStartSLOduration=130.050362119 podStartE2EDuration="2m10.050362119s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.049500903 +0000 UTC m=+153.044329643" watchObservedRunningTime="2026-01-23 14:06:46.050362119 +0000 UTC m=+153.045190859"
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.051521 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" podStartSLOduration=130.051512393 podStartE2EDuration="2m10.051512393s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.017290481 +0000 UTC m=+153.012119221" watchObservedRunningTime="2026-01-23 14:06:46.051512393 +0000 UTC m=+153.046341133"
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.062325 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.062779 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.562759399 +0000 UTC m=+153.557588139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.076735 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" podStartSLOduration=130.076718156 podStartE2EDuration="2m10.076718156s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.072883191 +0000 UTC m=+153.067711931" watchObservedRunningTime="2026-01-23 14:06:46.076718156 +0000 UTC m=+153.071546886"
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.164226 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.164585 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.664568229 +0000 UTC m=+153.659396969 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.265203 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.265576 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.765549265 +0000 UTC m=+153.760378005 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.366794 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.367239 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.867219121 +0000 UTC m=+153.862047951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.467631 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.467947 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:46.967920538 +0000 UTC m=+153.962749278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.481280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" event={"ID":"c7fae259-48f4-4d23-8685-6440a5246423","Type":"ContainerStarted","Data":"d7f784e0b78ae154a92ea6388c382889297f22e67565ca63df3c3516d80a564d"}
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.481549 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6"
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.484429 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" event={"ID":"aaac7553-88f9-49bd-811f-e993ad0cd40d","Type":"ContainerStarted","Data":"9207f4cc20df8acb8c3112286787b29fed970bb4df13c6df0c8107bf4ff986a5"}
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.486795 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" event={"ID":"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c","Type":"ContainerStarted","Data":"a9be1eb821096367941d651ad0fbaff1e3e70493d07ab58ce51929d49783e20c"}
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.486833 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" event={"ID":"d5dfee7e-59a9-43b1-bd2e-f3200ea5322c","Type":"ContainerStarted","Data":"85f9026711c001686802c646a663af3e3985c550e6eb233b0ef2642f09febb26"}
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.489626 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" event={"ID":"f9750de6-fc79-440e-8ad4-07acbe4edb49","Type":"ContainerStarted","Data":"e0170441bae0e68b6e1dc341a7d7696cabd456807615f2fb845fd149c49668af"}
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.489651 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" event={"ID":"f9750de6-fc79-440e-8ad4-07acbe4edb49","Type":"ContainerStarted","Data":"2c6c5488e5e2dd8849ad30f9e952900c6d3bb3901eb047ea513f1d5f025751a7"}
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.491592 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-br76j" event={"ID":"98e5fa0e-5fb3-4a38-bcdc-328a22d4460f","Type":"ContainerStarted","Data":"c7b928bf2b3f7e1db847c894c2c5c621ac31be41796fd8cc889baaa3cba16c21"}
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.493668 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xhzp8" event={"ID":"90f0ee56-8c51-4a42-ae4e-385ff7453aa7","Type":"ContainerStarted","Data":"85e2d3553fb927925148170099f82cdfdca611b89af79ff5fcf8fe5091d8f0ef"}
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.496266 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-2lgz4" event={"ID":"b2e6a5f5-108e-4832-8036-58e1228a7f4f","Type":"ContainerStarted","Data":"031e2c94b6730404430e4745984c61804fdbb8da3944262ac0b02b0ba5d32aaf"}
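The burst of "SyncLoop (PLEG)" lines above is the Pod Lifecycle Event Generator at work: it relists container state from the runtime and converts changes into ContainerStarted/ContainerDied events, which the kubelet sync loop consumes to re-sync each pod. A rough sketch of the event shape visible in these lines, with struct names invented for illustration (kubelet's internal types differ):

// Sketch of the PLEG event flow implied by the log: the runtime relist
// produces lifecycle events on a channel; the sync loop drains it.
package main

import "fmt"

type PodLifecycleEventType string

const (
    ContainerStarted PodLifecycleEventType = "ContainerStarted"
    ContainerDied    PodLifecycleEventType = "ContainerDied"
)

type PodLifecycleEvent struct {
    ID   string // pod UID
    Type PodLifecycleEventType
    Data string // container ID
}

func main() {
    events := make(chan PodLifecycleEvent, 1)
    events <- PodLifecycleEvent{
        ID:   "c7fae259-48f4-4d23-8685-6440a5246423",
        Type: ContainerStarted,
        Data: "d7f784e0b78ae154a92ea6388c382889297f22e67565ca63df3c3516d80a564d",
    }
    // The sync loop would select on this channel and trigger a pod sync.
    e := <-events
    fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.ID, e.Type, e.Data)
}

Note the csi-hostpathplugin-c9x8w ContainerStarted event in this burst: the CSI driver pod whose absence is causing the mount failures is now coming up.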
event={"ID":"b2e6a5f5-108e-4832-8036-58e1228a7f4f","Type":"ContainerStarted","Data":"031e2c94b6730404430e4745984c61804fdbb8da3944262ac0b02b0ba5d32aaf"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.499847 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bvqqf" event={"ID":"6d1f9f7b-5676-4445-b8ec-1288e6beff20","Type":"ContainerStarted","Data":"f995c2ab0ef4c82987021f947525ce0f43184fb0566a3ea5cb3ec1e44655269a"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.499902 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bvqqf" event={"ID":"6d1f9f7b-5676-4445-b8ec-1288e6beff20","Type":"ContainerStarted","Data":"376c7e111342068316b1a764195bd90aa01daf1eb6d8d2ab337bc21ec2589d46"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.500425 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.502475 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6" event={"ID":"53bbb237-ded5-402c-9bc3-a1cda18e8cfb","Type":"ContainerStarted","Data":"68d8b8c6b41b123576844085d332ab2c900d485c02003ae5d7de9da583809f10"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.504380 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rknc7" event={"ID":"6e802822-9935-46de-947b-c77bf8da4f9e","Type":"ContainerStarted","Data":"10c5831e1dc0b06cfffe4b21ff45c42157f053402a1e1ade4be36296023dfc50"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.507199 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" event={"ID":"c575b767-e334-406f-849d-e562d70985fd","Type":"ContainerStarted","Data":"d3516737a5109ef433d88489eb32b20fc6a9f40c17b89937c5af220085f560cb"} Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.517686 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-65w5f" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.519370 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.572548 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.577598 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.077583462 +0000 UTC m=+154.072412202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.589002 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" podStartSLOduration=130.588982553 podStartE2EDuration="2m10.588982553s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.582063746 +0000 UTC m=+153.576892486" watchObservedRunningTime="2026-01-23 14:06:46.588982553 +0000 UTC m=+153.583811283" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.637516 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bvqqf" podStartSLOduration=8.637500932 podStartE2EDuration="8.637500932s" podCreationTimestamp="2026-01-23 14:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.634372808 +0000 UTC m=+153.629201548" watchObservedRunningTime="2026-01-23 14:06:46.637500932 +0000 UTC m=+153.632329672" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.657107 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:46 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:46 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:46 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.657526 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.665756 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.681115 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.681291 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.181258428 +0000 UTC m=+154.176087168 (durationBeforeRetry 500ms). 
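The router startup probe output above is the conventional aggregated healthz body: each named sub-check is prefixed [+] when passing and [-] when failing (with "reason withheld" hiding detail), and any failure turns the endpoint into an HTTP 500, which the kubelet prober then reports. A hedged Go sketch of such a handler, in the style of the output rather than the OpenShift router's actual code:

// Sketch of an aggregated healthz endpoint producing the [+]/[-] body
// seen in the probe output above; check names match the log.
package main

import (
    "fmt"
    "net/http"
)

type check struct {
    name string
    run  func() error
}

func healthz(checks []check) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        failed := false
        body := ""
        for _, c := range checks {
            if err := c.run(); err != nil {
                failed = true
                body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
            } else {
                body += fmt.Sprintf("[+]%s ok\n", c.name)
            }
        }
        if failed {
            w.WriteHeader(http.StatusInternalServerError) // the 500 the kubelet probe reports
            body += "healthz check failed\n"
        }
        fmt.Fprint(w, body)
    }
}

func main() {
    http.Handle("/healthz", healthz([]check{
        {"backend-http", func() error { return fmt.Errorf("not ready") }},
        {"has-synced", func() error { return fmt.Errorf("not synced") }},
        {"process-running", func() error { return nil }},
    }))
    _ = http.ListenAndServe(":8080", nil)
}

Because this is a startup probe, the failures are expected while the router's backends sync; the kubelet keeps probing rather than restarting the container until the startup window is exhausted.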
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.681488 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.681819 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.181788634 +0000 UTC m=+154.176617374 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.686128 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-2vnwm"
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.734611 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" podStartSLOduration=130.734592591 podStartE2EDuration="2m10.734592591s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.709903574 +0000 UTC m=+153.704732324" watchObservedRunningTime="2026-01-23 14:06:46.734592591 +0000 UTC m=+153.729421331"
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.782084 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.782431 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.282410459 +0000 UTC m=+154.277239199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.792430 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-f7z9k" podStartSLOduration=130.792410348 podStartE2EDuration="2m10.792410348s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.790075078 +0000 UTC m=+153.784903828" watchObservedRunningTime="2026-01-23 14:06:46.792410348 +0000 UTC m=+153.787239098"
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.883456 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.884158 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.384138247 +0000 UTC m=+154.378967087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.894879 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" podStartSLOduration=130.894856777 podStartE2EDuration="2m10.894856777s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:46.864932674 +0000 UTC m=+153.859761424" watchObservedRunningTime="2026-01-23 14:06:46.894856777 +0000 UTC m=+153.889685517"
Jan 23 14:06:46 crc kubenswrapper[4775]: I0123 14:06:46.984744 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:46 crc kubenswrapper[4775]: E0123 14:06:46.985085 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.485068701 +0000 UTC m=+154.479897441 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.086223 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.086582 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.586567102 +0000 UTC m=+154.581395832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.125364 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-7gqzl"
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.187125 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.187481 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.687463006 +0000 UTC m=+154.682291746 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.284865 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"]
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.285199 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager" containerID="cri-o://1976824d0d7581f25778cade1ceabbaefa46516e629ce58d32cb2d84aec22a6a" gracePeriod=30
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.288306 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.288593 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.788582915 +0000 UTC m=+154.783411655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
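"Killing container with a grace period ... gracePeriod=30" above is the standard graceful-stop sequence: the runtime (cri-o here) delivers SIGTERM and escalates to SIGKILL only if the process outlives the grace period; the exitCode=0 ContainerDied event further down shows controller-manager exiting cleanly well inside it. A minimal Unix-only Go sketch of the pattern, illustrative rather than CRI-O's actual code:

// Sketch of SIGTERM-then-SIGKILL with a grace period, as implied by
// the kuberuntime_container.go line above.
package main

import (
    "log"
    "os/exec"
    "syscall"
    "time"
)

func stopWithGrace(cmd *exec.Cmd, grace time.Duration) {
    _ = cmd.Process.Signal(syscall.SIGTERM)
    done := make(chan error, 1)
    go func() { done <- cmd.Wait() }()
    select {
    case err := <-done:
        log.Printf("exited after SIGTERM: %v", err)
    case <-time.After(grace):
        _ = cmd.Process.Kill() // SIGKILL once the grace period is exhausted
        <-done
        log.Print("killed after grace period")
    }
}

func main() {
    cmd := exec.Command("sleep", "300")
    if err := cmd.Start(); err != nil {
        log.Fatal(err)
    }
    stopWithGrace(cmd, 30*time.Second)
}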
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.328080 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4"
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.389449 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.389584 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.889566121 +0000 UTC m=+154.884394861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.389608 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.389900 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.889891611 +0000 UTC m=+154.884720351 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.490666 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.490955 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:47.990941299 +0000 UTC m=+154.985770039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.510739 4775 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rfbk5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.510815 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5" podUID="d7707d7a-bfb7-4600-98f4-be607d9e77f4" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.513551 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" event={"ID":"aaac7553-88f9-49bd-811f-e993ad0cd40d","Type":"ContainerStarted","Data":"6d5699cb0bae3a1b15b42aa9d1eddc4aa81cd5e62ea544ef8bf880646999fd08"}
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.513611 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" event={"ID":"aaac7553-88f9-49bd-811f-e993ad0cd40d","Type":"ContainerStarted","Data":"28a9405b9b29619c4d55a76d941051d6302e32cfb4060b64f3318e315e1fcc7a"}
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.517513 4775 generic.go:334] "Generic (PLEG): container finished" podID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerID="1976824d0d7581f25778cade1ceabbaefa46516e629ce58d32cb2d84aec22a6a" exitCode=0
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.517660 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" event={"ID":"1f3aab1c-726d-4027-b629-e04916bc4f8b","Type":"ContainerDied","Data":"1976824d0d7581f25778cade1ceabbaefa46516e629ce58d32cb2d84aec22a6a"}
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.593393 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.593738 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.093723038 +0000 UTC m=+155.088551778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.650317 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 23 14:06:47 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld
Jan 23 14:06:47 crc kubenswrapper[4775]: [+]process-running ok
Jan 23 14:06:47 crc kubenswrapper[4775]: healthz check failed
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.650384 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.695555 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.695723 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.195696643 +0000 UTC m=+155.190525383 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.695765 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.696207 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.196199578 +0000 UTC m=+155.191028318 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.800992 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.801289 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.301273816 +0000 UTC m=+155.296102556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.903045 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:47 crc kubenswrapper[4775]: E0123 14:06:47.903347 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.403336034 +0000 UTC m=+155.398164774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.907944 4775 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.947722 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2q2jj"]
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.948689 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.950703 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.956751 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4"
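The plugin_watcher line just above is the turning point of this section: the registration socket for kubevirt.io.hostpath-provisioner has finally appeared under /var/lib/kubelet/plugins_registry, so the driver can register with kubelet and the image-registry PVC mount that has been failing since 14:06:45 can proceed on a later retry. Conceptually, the watcher is a filesystem watch over that directory; a hedged sketch using the third-party github.com/fsnotify/fsnotify package (kubelet's real implementation differs):

// Sketch of socket discovery in the spirit of plugin_watcher.go: watch
// the registry directory and record new *.sock paths. Requires the
// github.com/fsnotify/fsnotify module; illustrative only.
package main

import (
    "log"
    "path/filepath"

    "github.com/fsnotify/fsnotify"
)

func main() {
    w, err := fsnotify.NewWatcher()
    if err != nil {
        log.Fatal(err)
    }
    defer w.Close()
    if err := w.Add("/var/lib/kubelet/plugins_registry"); err != nil {
        log.Fatal(err)
    }
    for ev := range w.Events {
        if ev.Op&fsnotify.Create != 0 && filepath.Ext(ev.Name) == ".sock" {
            // kubelet would now dial this socket and run the plugin
            // registration handshake, adding the driver to its list.
            log.Printf("Adding socket path to desired state cache: %s", ev.Name)
        }
    }
}

Once the handshake completes, "driver name ... not found in the list of registered CSI drivers" stops being true, which is why the mount errors taper off after this point in the log.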
Jan 23 14:06:47 crc kubenswrapper[4775]: I0123 14:06:47.964265 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rfbk5"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004305 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcvf9\" (UniqueName: \"kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9\") pod \"1f3aab1c-726d-4027-b629-e04916bc4f8b\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") "
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004456 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004501 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert\") pod \"1f3aab1c-726d-4027-b629-e04916bc4f8b\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") "
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004535 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca\") pod \"1f3aab1c-726d-4027-b629-e04916bc4f8b\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") "
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004560 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles\") pod \"1f3aab1c-726d-4027-b629-e04916bc4f8b\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") "
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004588 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config\") pod \"1f3aab1c-726d-4027-b629-e04916bc4f8b\" (UID: \"1f3aab1c-726d-4027-b629-e04916bc4f8b\") "
Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.004649 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.504621109 +0000 UTC m=+155.499449849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004790 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004861 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004919 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2lfm\" (UniqueName: \"kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.004940 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.005158 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca" (OuterVolumeSpecName: "client-ca") pod "1f3aab1c-726d-4027-b629-e04916bc4f8b" (UID: "1f3aab1c-726d-4027-b629-e04916bc4f8b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.005204 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.505193256 +0000 UTC m=+155.500021996 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.005254 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config" (OuterVolumeSpecName: "config") pod "1f3aab1c-726d-4027-b629-e04916bc4f8b" (UID: "1f3aab1c-726d-4027-b629-e04916bc4f8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.005282 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1f3aab1c-726d-4027-b629-e04916bc4f8b" (UID: "1f3aab1c-726d-4027-b629-e04916bc4f8b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.013109 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1f3aab1c-726d-4027-b629-e04916bc4f8b" (UID: "1f3aab1c-726d-4027-b629-e04916bc4f8b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.024252 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9" (OuterVolumeSpecName: "kube-api-access-vcvf9") pod "1f3aab1c-726d-4027-b629-e04916bc4f8b" (UID: "1f3aab1c-726d-4027-b629-e04916bc4f8b"). InnerVolumeSpecName "kube-api-access-vcvf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.083907 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q2jj"]
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.106554 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.106758 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.106840 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2lfm\" (UniqueName: \"kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107082 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.107128 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.60711474 +0000 UTC m=+155.601943470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107174 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-config\") on node \"crc\" DevicePath \"\""
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107186 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcvf9\" (UniqueName: \"kubernetes.io/projected/1f3aab1c-726d-4027-b629-e04916bc4f8b-kube-api-access-vcvf9\") on node \"crc\" DevicePath \"\""
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107195 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f3aab1c-726d-4027-b629-e04916bc4f8b-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107203 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107211 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1f3aab1c-726d-4027-b629-e04916bc4f8b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107945 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.107973 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.139666 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-285dn"]
Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.139873 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.139885 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.139984 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" containerName="controller-manager"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.140610 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:06:48 crc kubenswrapper[4775]: W0123 14:06:48.161755 4775 reflector.go:561] object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g": failed to list *v1.Secret: secrets "certified-operators-dockercfg-4rs5g" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object
Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.161817 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-4rs5g\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"certified-operators-dockercfg-4rs5g\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.162647 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2lfm\" (UniqueName: \"kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm\") pod \"community-operators-2q2jj\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.167562 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-285dn"]
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.208823 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.208874 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnxtf\" (UniqueName: \"kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.208922 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl"
Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.208945 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.209254 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.70923888 +0000 UTC m=+155.704067620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
No retries permitted until 2026-01-23 14:06:48.70923888 +0000 UTC m=+155.704067620 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.268615 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.309867 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.310078 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.81004935 +0000 UTC m=+155.804878090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.310442 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.310492 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnxtf\" (UniqueName: \"kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.310560 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.310598 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities\") pod \"certified-operators-285dn\" (UID: 
\"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.311017 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.810997739 +0000 UTC m=+155.805826479 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.311043 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.311133 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.317056 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.317985 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.335930 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.340973 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnxtf\" (UniqueName: \"kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf\") pod \"certified-operators-285dn\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.411624 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.411794 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.411831 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.411879 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhknv\" (UniqueName: \"kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.411985 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:48.911969784 +0000 UTC m=+155.906798524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.513551 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.513585 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.513633 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhknv\" (UniqueName: \"kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.513661 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.514017 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.014000331 +0000 UTC m=+156.008829071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.514667 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.514869 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.522778 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.524058 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.541758 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.541795 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhknv\" (UniqueName: \"kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv\") pod \"community-operators-pphm8\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.546971 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.547990 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-v2bx4" event={"ID":"1f3aab1c-726d-4027-b629-e04916bc4f8b","Type":"ContainerDied","Data":"c804f2807463870f94ca39d16cb9e5b2566a2fdc9148b1292a1636387b79edff"} Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.548047 4775 scope.go:117] "RemoveContainer" containerID="1976824d0d7581f25778cade1ceabbaefa46516e629ce58d32cb2d84aec22a6a" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.587883 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" event={"ID":"aaac7553-88f9-49bd-811f-e993ad0cd40d","Type":"ContainerStarted","Data":"df433355476c1e3453b026f3a6326a60187145c8f5ca08e20e52c73b1cefe1da"} Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.618422 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-c9x8w" podStartSLOduration=10.618403838999999 podStartE2EDuration="10.618403839s" podCreationTimestamp="2026-01-23 14:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:48.614718309 +0000 UTC m=+155.609547059" watchObservedRunningTime="2026-01-23 14:06:48.618403839 +0000 UTC m=+155.613232579" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.622376 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.622562 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.622642 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.622693 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.622795 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-23 14:06:49.12278049 +0000 UTC m=+156.117609230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.634877 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.635163 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.654191 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:48 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:48 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:48 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.654528 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.668009 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-v2bx4"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.671450 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2q2jj"] Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.725508 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.725568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.725599 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.725701 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj"
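
The router startup-probe failure just above is worth unpacking. The probe body uses the aggregated healthz format: one line per named check, "[+]name ok" or "[-]name failed: reason withheld", with a "healthz check failed" trailer, and any failing check turns the whole response into HTTP 500, which is what prober.go records. The self-contained Go sketch below reproduces that response shape; the hard-coded check names and results mirror the log, but the handler itself is only an illustration of the format, not the OpenShift router's actual code, and the :8080 listen address is an arbitrary choice for the sketch.

// healthz_sketch.go - a minimal sketch of the aggregated health-check
// response seen in the probe output above. Illustrative only; not the
// router's real handler.
package main

import (
    "fmt"
    "net/http"
)

// check is a named health check with a precomputed result.
type check struct {
    name string
    ok   bool
}

// healthz renders each check as "[+]name ok" or "[-]name failed: reason
// withheld" and returns HTTP 500 if any check failed, which is exactly why
// the kubelet probe logs "HTTP probe failed with statuscode: 500".
func healthz(checks []check) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        body := ""
        failed := false
        for _, c := range checks {
            if c.ok {
                body += fmt.Sprintf("[+]%s ok\n", c.name)
            } else {
                body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                failed = true
            }
        }
        if failed {
            body += "healthz check failed\n"
            w.WriteHeader(http.StatusInternalServerError) // probe sees 500
        }
        fmt.Fprint(w, body)
    }
}

func main() {
    // Results mirror the log: the router has not synced its backends yet.
    http.Handle("/healthz", healthz([]check{
        {"backend-http", false},
        {"has-synced", false},
        {"process-running", true},
    }))
    http.ListenAndServe(":8080", nil)
}

Note that the kubelet prober only records the status code and the beginning of the body (the start-of-body= field), so only the first check lines appear inline in the journal.
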
(UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.727694 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.227680752 +0000 UTC m=+156.222509492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.728100 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.728571 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.751846 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7\") pod \"certified-operators-hdhzj\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.826485 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.826757 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.32672796 +0000 UTC m=+156.321556780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.882124 4775 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T14:06:47.907979993Z","Handler":null,"Name":""} Jan 23 14:06:48 crc kubenswrapper[4775]: I0123 14:06:48.927867 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:48 crc kubenswrapper[4775]: E0123 14:06:48.928185 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.4281729 +0000 UTC m=+156.423001640 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.028693 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:49 crc kubenswrapper[4775]: E0123 14:06:49.029119 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.529104134 +0000 UTC m=+156.523932874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.047926 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:06:49 crc kubenswrapper[4775]: W0123 14:06:49.052685 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a627ae2_fe8d_403e_9d14_3c3ace588da5.slice/crio-0b453500d83d6bbbd03aaa519b618891a6bceb9a87ed025821643578d93cd618 WatchSource:0}: Error finding container 0b453500d83d6bbbd03aaa519b618891a6bceb9a87ed025821643578d93cd618: Status 404 returned error can't find the container with id 0b453500d83d6bbbd03aaa519b618891a6bceb9a87ed025821643578d93cd618 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.130794 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: E0123 14:06:49.131125 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.63109907 +0000 UTC m=+156.625927820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.232300 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:49 crc kubenswrapper[4775]: E0123 14:06:49.232577 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.732549559 +0000 UTC m=+156.727378329 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.333749 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: E0123 14:06:49.334138 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-23 14:06:49.834122273 +0000 UTC m=+156.828951013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-xpwjl" (UID: "85b405af-7314-4e53-93a5-252b69153561") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.367227 4775 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.367296 4775 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.435236 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.444379 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.470301 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.471948 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
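
At this point the retry loop above resolves. Every UnmountVolume.TearDown and MountVolume.MountDevice attempt on pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 had been failing with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" and was requeued with durationBeforeRetry 500ms; once the driver's registration socket was processed (RegisterPlugin at 14:06:48.882, validation and registration at 14:06:49.367), the very next TearDown attempt succeeds at 14:06:49.444. The self-contained Go sketch below illustrates that retry-until-registered pattern under stated assumptions: driverRegistry and mountDevice are invented names, the registry is a plain guarded map, and the fixed 500ms backoff is a simplification of kubelet's actual nestedpendingoperations exponential backoff.

// registry_retry.go - a minimal sketch of the pattern visible above: volume
// operations fail fast while a CSI driver name is missing from the
// registered-driver list, and are retried on a 500ms backoff until plugin
// registration lands. Illustrative only; not kubelet source.
package main

import (
    "fmt"
    "sync"
    "time"
)

// driverRegistry stands in for kubelet's list of registered CSI plugins.
type driverRegistry struct {
    mu      sync.Mutex
    drivers map[string]string // driver name -> endpoint socket path
}

func (r *driverRegistry) register(name, endpoint string) {
    r.mu.Lock()
    defer r.mu.Unlock()
    r.drivers[name] = endpoint
}

func (r *driverRegistry) lookup(name string) (string, error) {
    r.mu.Lock()
    defer r.mu.Unlock()
    ep, ok := r.drivers[name]
    if !ok {
        // Same failure shape as the log: no CSI client can be built yet.
        return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
    }
    return ep, nil
}

// mountDevice succeeds only once the driver resolves to an endpoint.
func mountDevice(r *driverRegistry, driver, volume string) error {
    ep, err := r.lookup(driver)
    if err != nil {
        return fmt.Errorf("MountVolume.MountDevice failed for volume %q: %w", volume, err)
    }
    fmt.Printf("MountDevice succeeded for %q via %s\n", volume, ep)
    return nil
}

func main() {
    reg := &driverRegistry{drivers: map[string]string{}}

    // Simulate the node plugin registering ~1.3s after the first failure,
    // roughly the gap between 14:06:48.107 and 14:06:49.367 in the log.
    go func() {
        time.Sleep(1300 * time.Millisecond)
        reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
    }()

    const backoff = 500 * time.Millisecond // durationBeforeRetry in the log
    for attempt := 1; ; attempt++ {
        err := mountDevice(reg, "kubevirt.io.hostpath-provisioner", "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8")
        if err == nil {
            return
        }
        fmt.Printf("attempt %d failed, no retries permitted for %s: %v\n", attempt, backoff, err)
        time.Sleep(backoff)
    }
}

Run as-is, this prints a few failed attempts roughly 500ms apart and then the success line, matching the cadence of the nestedpendingoperations errors and the eventual MountDevice success recorded below.
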
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.474646 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.476237 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.482337 4775 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-marketplace/certified-operators-285dn" secret="" err="failed to sync secret cache: timed out waiting for the condition" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.482426 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.487190 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.536714 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.536885 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.536921 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.538140 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.540471 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.540511 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.547704 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.609286 4775 generic.go:334] "Generic (PLEG): container finished" podID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerID="cbed6950aa3965cd8bfc7aa378027bf0a2d1e04ccbea9bb4f1e5636ae166f729" exitCode=0 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.609393 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerDied","Data":"cbed6950aa3965cd8bfc7aa378027bf0a2d1e04ccbea9bb4f1e5636ae166f729"} Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.609421 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerStarted","Data":"0b453500d83d6bbbd03aaa519b618891a6bceb9a87ed025821643578d93cd618"} Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.616187 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-xpwjl\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.620304 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.632703 4775 generic.go:334] "Generic (PLEG): container finished" podID="8bb5169a-229e-4d38-beea-4783c11d0098" containerID="c0baa5a93e54c6225c779b90a89902f01c5bdd44c7fddb995bab3ef18e6ecb5f" exitCode=0 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.632773 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerDied","Data":"c0baa5a93e54c6225c779b90a89902f01c5bdd44c7fddb995bab3ef18e6ecb5f"} Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.632814 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerStarted","Data":"3666244710ce45438b030ced5df57918d02f4be6ca49d93c06949ae50a2a548e"} Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.634458 4775 generic.go:334] "Generic (PLEG): container finished" podID="2d6b6f17-bb56-49ba-8487-6e07346780a1" containerID="bd180f88acb55bc6174b54cab0740792964b942d82c9bf0cffd2ac1751bececd" exitCode=0 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.634602 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" event={"ID":"2d6b6f17-bb56-49ba-8487-6e07346780a1","Type":"ContainerDied","Data":"bd180f88acb55bc6174b54cab0740792964b942d82c9bf0cffd2ac1751bececd"} Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.637498 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.637539 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.637901 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.652269 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:49 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:49 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:49 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.652335 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.684663 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.734924 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f3aab1c-726d-4027-b629-e04916bc4f8b" path="/var/lib/kubelet/pods/1f3aab1c-726d-4027-b629-e04916bc4f8b/volumes" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.735684 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 23 14:06:49 crc kubenswrapper[4775]: W0123 14:06:49.735675 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b219edd_2ebd_4968_b427_ec555eade68c.slice/crio-9a6cbd2e89e6d00653f0a6c222530e1e89b3f96e06271f5d87d7fff651ac3937 WatchSource:0}: Error finding container 9a6cbd2e89e6d00653f0a6c222530e1e89b3f96e06271f5d87d7fff651ac3937: Status 404 returned error can't find the container with id 9a6cbd2e89e6d00653f0a6c222530e1e89b3f96e06271f5d87d7fff651ac3937 Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.736107 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-285dn"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.776997 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.782646 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.797486 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.799575 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.805637 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.806000 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.806241 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.806259 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.807129 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.807603 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.808249 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"] Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.815984 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.839111 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.839173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.839210 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccjkp\" (UniqueName: \"kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.839243 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.839280 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.910556 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.940979 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccjkp\" (UniqueName: \"kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.941021 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.941050 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.941109 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.941134 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.942290 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.944471 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.945331 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.949492 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.959789 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccjkp\" (UniqueName: \"kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp\") pod \"controller-manager-879f6c89f-fp8bb\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") " pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:49 crc kubenswrapper[4775]: I0123 14:06:49.986962 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:06:50 crc kubenswrapper[4775]: W0123 14:06:50.024813 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85b405af_7314_4e53_93a5_252b69153561.slice/crio-b50d7a209d2fcc5cb17e88e539bff4914e9d70de68aa4c3a0de07ad93e7848e4 WatchSource:0}: Error finding container b50d7a209d2fcc5cb17e88e539bff4914e9d70de68aa4c3a0de07ad93e7848e4: Status 404 returned error can't find the container with id b50d7a209d2fcc5cb17e88e539bff4914e9d70de68aa4c3a0de07ad93e7848e4 Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.096371 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 23 14:06:50 crc kubenswrapper[4775]: W0123 14:06:50.113454 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4927c747_c679_46bf_bcc6_485f87f885ab.slice/crio-c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72 WatchSource:0}: Error finding container c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72: Status 404 returned error can't find the container with id c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72 Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.116310 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"] Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.117487 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.119464 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.124692 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"] Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.135959 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.245329 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phm66\" (UniqueName: \"kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.245758 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.245785 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.333951 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"] Jan 23 14:06:50 crc kubenswrapper[4775]: W0123 14:06:50.335334 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4a8470cc_442d_4efc_91a2_af7e4fe75b3a.slice/crio-971aa15dc628c22efdf895129c598854abbaff49521d3e188678eecd5ae7782c WatchSource:0}: Error finding container 971aa15dc628c22efdf895129c598854abbaff49521d3e188678eecd5ae7782c: Status 404 returned error can't find the container with id 971aa15dc628c22efdf895129c598854abbaff49521d3e188678eecd5ae7782c Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.349512 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phm66\" (UniqueName: \"kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.349563 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.349584 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.350382 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.350443 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.367573 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phm66\" (UniqueName: \"kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66\") pod \"redhat-marketplace-q6l68\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.450207 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.520703 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.522382 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.534222 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.553369 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.554559 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.565730 4775 patch_prober.go:28] interesting pod/apiserver-76f77b778f-mc4h4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]log ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]etcd ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/max-in-flight-filter ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 14:06:50 crc kubenswrapper[4775]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/openshift.io-startinformers ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 14:06:50 crc kubenswrapper[4775]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 14:06:50 crc kubenswrapper[4775]: livez check failed Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.565777 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" podUID="f9750de6-fc79-440e-8ad4-07acbe4edb49" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.605565 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvqcg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.605615 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-mvqcg" podUID="8ba1b8ce-8332-45c9-bfb0-9a1842dea009" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.605748 4775 patch_prober.go:28] interesting pod/downloads-7954f5f757-mvqcg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get 
\"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.605790 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-mvqcg" podUID="8ba1b8ce-8332-45c9-bfb0-9a1842dea009" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.607486 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-4dpv6" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.652706 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.652759 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.652873 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mds26\" (UniqueName: \"kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.654746 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:50 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:50 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:50 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.654861 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.669227 4775 generic.go:334] "Generic (PLEG): container finished" podID="945aeb53-25e2-4666-8fbe-a12be2948454" containerID="6872f50c5369e996aaf9998a59794f18e488c47ef49db5d73fa140ee26fe751a" exitCode=0 Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.669331 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerDied","Data":"6872f50c5369e996aaf9998a59794f18e488c47ef49db5d73fa140ee26fe751a"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.669406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" 
event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerStarted","Data":"0b8e8f2a3112c9f0a5edf42bad4d4c0988004cce6f56bf24b39ad208c83c6912"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.680628 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4927c747-c679-46bf-bcc6-485f87f885ab","Type":"ContainerStarted","Data":"314e3c9c844a6677c18f60414390ec85b7864dca6d7ccf08978dd36224f72f04"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.680667 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4927c747-c679-46bf-bcc6-485f87f885ab","Type":"ContainerStarted","Data":"c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.682576 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b219edd-2ebd-4968-b427-ec555eade68c" containerID="1dfa5709162617f477770a0c1b0ee689961a84471dd689b9f7007baa498421fb" exitCode=0 Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.682623 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerDied","Data":"1dfa5709162617f477770a0c1b0ee689961a84471dd689b9f7007baa498421fb"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.682638 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerStarted","Data":"9a6cbd2e89e6d00653f0a6c222530e1e89b3f96e06271f5d87d7fff651ac3937"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.687481 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" event={"ID":"4a8470cc-442d-4efc-91a2-af7e4fe75b3a","Type":"ContainerStarted","Data":"f4b1eb7532640c0119fea3d1dd873eab326ab51390a8e59dcd343707c94098b9"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.687508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" event={"ID":"4a8470cc-442d-4efc-91a2-af7e4fe75b3a","Type":"ContainerStarted","Data":"971aa15dc628c22efdf895129c598854abbaff49521d3e188678eecd5ae7782c"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.688453 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.692138 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" event={"ID":"85b405af-7314-4e53-93a5-252b69153561","Type":"ContainerStarted","Data":"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.692192 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" event={"ID":"85b405af-7314-4e53-93a5-252b69153561","Type":"ContainerStarted","Data":"b50d7a209d2fcc5cb17e88e539bff4914e9d70de68aa4c3a0de07ad93e7848e4"} Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.692302 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.706346 4775 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.706323931 podStartE2EDuration="1.706323931s" podCreationTimestamp="2026-01-23 14:06:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:50.704023422 +0000 UTC m=+157.698852162" watchObservedRunningTime="2026-01-23 14:06:50.706323931 +0000 UTC m=+157.701152671" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.730140 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.742651 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"] Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.753731 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.753842 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mds26\" (UniqueName: \"kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.754251 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.755955 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.755962 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: W0123 14:06:50.773827 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode59d5724_424f_4151_98a4_c2cfa3918ac0.slice/crio-26c35738c37491d0603ee348b5fe634ea59da9d48f5e4b15355f05e6dc983614 WatchSource:0}: Error finding container 26c35738c37491d0603ee348b5fe634ea59da9d48f5e4b15355f05e6dc983614: Status 404 returned error can't find the container with id 26c35738c37491d0603ee348b5fe634ea59da9d48f5e4b15355f05e6dc983614 Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.778424 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podStartSLOduration=3.778407073 podStartE2EDuration="3.778407073s" podCreationTimestamp="2026-01-23 14:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:50.774669442 +0000 UTC m=+157.769498182" watchObservedRunningTime="2026-01-23 14:06:50.778407073 +0000 UTC m=+157.773235813" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.779253 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mds26\" (UniqueName: \"kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26\") pod \"redhat-marketplace-998gd\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.845100 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:06:50 crc kubenswrapper[4775]: I0123 14:06:50.852970 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" podStartSLOduration=134.852947539 podStartE2EDuration="2m14.852947539s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:50.847510887 +0000 UTC m=+157.842339627" watchObservedRunningTime="2026-01-23 14:06:50.852947539 +0000 UTC m=+157.847776279" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.135895 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"] Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.138522 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.142360 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.174150 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"] Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.197266 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.197303 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.229130 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.238292 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.240579 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.273659 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.273704 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.273741 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2k5h\" (UniqueName: \"kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376279 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume\") pod \"2d6b6f17-bb56-49ba-8487-6e07346780a1\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376334 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume\") pod \"2d6b6f17-bb56-49ba-8487-6e07346780a1\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376397 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n99rp\" (UniqueName: \"kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp\") pod \"2d6b6f17-bb56-49ba-8487-6e07346780a1\" (UID: \"2d6b6f17-bb56-49ba-8487-6e07346780a1\") " Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376559 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376584 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.376617 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2k5h\" (UniqueName: \"kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " 
pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.379395 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume" (OuterVolumeSpecName: "config-volume") pod "2d6b6f17-bb56-49ba-8487-6e07346780a1" (UID: "2d6b6f17-bb56-49ba-8487-6e07346780a1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.379752 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.379982 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.391372 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp" (OuterVolumeSpecName: "kube-api-access-n99rp") pod "2d6b6f17-bb56-49ba-8487-6e07346780a1" (UID: "2d6b6f17-bb56-49ba-8487-6e07346780a1"). InnerVolumeSpecName "kube-api-access-n99rp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.391677 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2d6b6f17-bb56-49ba-8487-6e07346780a1" (UID: "2d6b6f17-bb56-49ba-8487-6e07346780a1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.399570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2k5h\" (UniqueName: \"kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h\") pod \"redhat-operators-84gx7\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.480645 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2d6b6f17-bb56-49ba-8487-6e07346780a1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.480689 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2d6b6f17-bb56-49ba-8487-6e07346780a1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.480703 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n99rp\" (UniqueName: \"kubernetes.io/projected/2d6b6f17-bb56-49ba-8487-6e07346780a1-kube-api-access-n99rp\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.503139 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.559199 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:06:51 crc kubenswrapper[4775]: E0123 14:06:51.559496 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6b6f17-bb56-49ba-8487-6e07346780a1" containerName="collect-profiles" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.559513 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6b6f17-bb56-49ba-8487-6e07346780a1" containerName="collect-profiles" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.559667 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6b6f17-bb56-49ba-8487-6e07346780a1" containerName="collect-profiles" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.560524 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.567637 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.596865 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.596935 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.605849 4775 patch_prober.go:28] interesting pod/console-f9d7485db-fgb82 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.605907 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fgb82" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.651947 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.667968 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:51 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:51 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:51 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.668031 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.683826 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.683870 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6gc\" (UniqueName: \"kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.684021 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.731839 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.774473 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b" event={"ID":"2d6b6f17-bb56-49ba-8487-6e07346780a1","Type":"ContainerDied","Data":"87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.774510 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87bcaa2b52f967df4d7cb67d7c4f5117d6253d2482ec76ad6ef22eaa91c61737" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.785591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.785676 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.785698 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6gc\" (UniqueName: \"kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.801398 4775 generic.go:334] "Generic (PLEG): container finished" podID="4927c747-c679-46bf-bcc6-485f87f885ab" containerID="314e3c9c844a6677c18f60414390ec85b7864dca6d7ccf08978dd36224f72f04" exitCode=0 Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.801490 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"4927c747-c679-46bf-bcc6-485f87f885ab","Type":"ContainerDied","Data":"314e3c9c844a6677c18f60414390ec85b7864dca6d7ccf08978dd36224f72f04"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.803988 4775 generic.go:334] "Generic (PLEG): container finished" podID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerID="b99c9f768aa87908f3ac8df6adf51f693264f7a4696b77a222908931aa45eca9" exitCode=0 Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.804025 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerDied","Data":"b99c9f768aa87908f3ac8df6adf51f693264f7a4696b77a222908931aa45eca9"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.804044 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerStarted","Data":"26c35738c37491d0603ee348b5fe634ea59da9d48f5e4b15355f05e6dc983614"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.811930 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerStarted","Data":"ca983591e9c5773d2d910396e97f6529e836009e39c2ca638887beada7a160d7"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.811985 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerStarted","Data":"3ac2cbde2ce107b51f2fd46e9adae179e9362f5a9c3e49977d3cabfab8d5c7a8"} Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.838671 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.838954 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.839179 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-tsdcf" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.844825 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6gc\" (UniqueName: \"kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc\") pod \"redhat-operators-stflq\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:51 crc kubenswrapper[4775]: I0123 14:06:51.894578 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.048746 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"] Jan 23 14:06:52 crc kubenswrapper[4775]: W0123 14:06:52.096065 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e3253a9_fac0_401c_8e02_52758dbc40f3.slice/crio-15af52003ac596b61d4d000ce7f453341ef0c574add7e4ae39f4de44a23d82f4 WatchSource:0}: Error finding container 15af52003ac596b61d4d000ce7f453341ef0c574add7e4ae39f4de44a23d82f4: Status 404 returned error can't find the container with id 15af52003ac596b61d4d000ce7f453341ef0c574add7e4ae39f4de44a23d82f4 Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.433477 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.651639 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:52 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:52 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:52 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.651705 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.828398 4775 generic.go:334] "Generic (PLEG): container finished" podID="9f29362d-380a-46e7-b163-0ff42600d563" containerID="8cf1d207d3c181ec1fe849262ab8dacc707e0308d2b5ce3e6df1a12ceacccc47" exitCode=0 Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.828456 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerDied","Data":"8cf1d207d3c181ec1fe849262ab8dacc707e0308d2b5ce3e6df1a12ceacccc47"} Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.828482 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerStarted","Data":"50edf2899c3c4bd4f94febab7dade88c7fd87dc6b2dfbbaffdba8627cd2c9677"} Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.831402 4775 generic.go:334] "Generic (PLEG): container finished" podID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerID="ca983591e9c5773d2d910396e97f6529e836009e39c2ca638887beada7a160d7" exitCode=0 Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.831445 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerDied","Data":"ca983591e9c5773d2d910396e97f6529e836009e39c2ca638887beada7a160d7"} Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.832844 4775 generic.go:334] "Generic (PLEG): container finished" podID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerID="33e54abbac164ceea7f804e54924e8f9324295ef8959032204bb2d352664a565" exitCode=0 Jan 23 14:06:52 crc kubenswrapper[4775]: 
I0123 14:06:52.832981 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerDied","Data":"33e54abbac164ceea7f804e54924e8f9324295ef8959032204bb2d352664a565"} Jan 23 14:06:52 crc kubenswrapper[4775]: I0123 14:06:52.833031 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerStarted","Data":"15af52003ac596b61d4d000ce7f453341ef0c574add7e4ae39f4de44a23d82f4"} Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.226507 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.226581 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.454176 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.650692 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:53 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:53 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:53 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.650766 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.654487 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access\") pod \"4927c747-c679-46bf-bcc6-485f87f885ab\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.654550 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir\") pod \"4927c747-c679-46bf-bcc6-485f87f885ab\" (UID: \"4927c747-c679-46bf-bcc6-485f87f885ab\") " Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.654851 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4927c747-c679-46bf-bcc6-485f87f885ab" (UID: "4927c747-c679-46bf-bcc6-485f87f885ab"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.680354 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4927c747-c679-46bf-bcc6-485f87f885ab" (UID: "4927c747-c679-46bf-bcc6-485f87f885ab"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.745893 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 14:06:53 crc kubenswrapper[4775]: E0123 14:06:53.746181 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4927c747-c679-46bf-bcc6-485f87f885ab" containerName="pruner" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.746199 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4927c747-c679-46bf-bcc6-485f87f885ab" containerName="pruner" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.746311 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4927c747-c679-46bf-bcc6-485f87f885ab" containerName="pruner" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.746633 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.746724 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.748440 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.748610 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.756204 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.756283 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.756351 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4927c747-c679-46bf-bcc6-485f87f885ab-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.756364 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4927c747-c679-46bf-bcc6-485f87f885ab-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.850610 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"4927c747-c679-46bf-bcc6-485f87f885ab","Type":"ContainerDied","Data":"c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72"} Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.850649 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.850657 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5e2cf7dc94bca19d391d27aa9b768b85ccfa71fad8a84b4ced6560f9dc08f72" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.857501 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.857599 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.857637 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:53 crc kubenswrapper[4775]: I0123 14:06:53.875750 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:54 crc kubenswrapper[4775]: I0123 14:06:54.070889 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:54 crc kubenswrapper[4775]: I0123 14:06:54.313452 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 23 14:06:54 crc kubenswrapper[4775]: W0123 14:06:54.340959 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod266e861e_ba27_43d0_adfd_79b593bdb663.slice/crio-b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a WatchSource:0}: Error finding container b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a: Status 404 returned error can't find the container with id b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a Jan 23 14:06:54 crc kubenswrapper[4775]: I0123 14:06:54.649745 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:54 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:54 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:54 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:54 crc kubenswrapper[4775]: I0123 14:06:54.649895 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:54 crc kubenswrapper[4775]: I0123 14:06:54.857549 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"266e861e-ba27-43d0-adfd-79b593bdb663","Type":"ContainerStarted","Data":"b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a"} Jan 23 14:06:55 crc kubenswrapper[4775]: I0123 14:06:55.562695 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:55 crc kubenswrapper[4775]: I0123 14:06:55.567387 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-mc4h4" Jan 23 14:06:55 crc kubenswrapper[4775]: I0123 14:06:55.655025 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:55 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:55 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:55 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:55 crc kubenswrapper[4775]: I0123 14:06:55.655081 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:56 crc kubenswrapper[4775]: I0123 14:06:56.678498 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:56 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:56 crc kubenswrapper[4775]: 
[+]process-running ok Jan 23 14:06:56 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:56 crc kubenswrapper[4775]: I0123 14:06:56.678750 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:56 crc kubenswrapper[4775]: I0123 14:06:56.908270 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bvqqf" Jan 23 14:06:56 crc kubenswrapper[4775]: I0123 14:06:56.916993 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"266e861e-ba27-43d0-adfd-79b593bdb663","Type":"ContainerStarted","Data":"3134730bce153d56131545a3a9d6e4f71faffb1f17d6451fcb3d28adca9ec8ec"} Jan 23 14:06:56 crc kubenswrapper[4775]: I0123 14:06:56.957242 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.957225015 podStartE2EDuration="3.957225015s" podCreationTimestamp="2026-01-23 14:06:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:06:56.953313018 +0000 UTC m=+163.948141758" watchObservedRunningTime="2026-01-23 14:06:56.957225015 +0000 UTC m=+163.952053755" Jan 23 14:06:57 crc kubenswrapper[4775]: I0123 14:06:57.648859 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:57 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:57 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:57 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:57 crc kubenswrapper[4775]: I0123 14:06:57.648917 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:57 crc kubenswrapper[4775]: I0123 14:06:57.923955 4775 generic.go:334] "Generic (PLEG): container finished" podID="266e861e-ba27-43d0-adfd-79b593bdb663" containerID="3134730bce153d56131545a3a9d6e4f71faffb1f17d6451fcb3d28adca9ec8ec" exitCode=0 Jan 23 14:06:57 crc kubenswrapper[4775]: I0123 14:06:57.924014 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"266e861e-ba27-43d0-adfd-79b593bdb663","Type":"ContainerDied","Data":"3134730bce153d56131545a3a9d6e4f71faffb1f17d6451fcb3d28adca9ec8ec"} Jan 23 14:06:58 crc kubenswrapper[4775]: I0123 14:06:58.649770 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:58 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:58 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:58 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:58 crc kubenswrapper[4775]: I0123 14:06:58.649850 4775 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:06:58 crc kubenswrapper[4775]: I0123 14:06:58.748566 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:58 crc kubenswrapper[4775]: I0123 14:06:58.754871 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/63ed1a97-c97e-40d0-afdf-260c475dc83f-metrics-certs\") pod \"network-metrics-daemon-47lz2\" (UID: \"63ed1a97-c97e-40d0-afdf-260c475dc83f\") " pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.005497 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-47lz2" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.322624 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.375524 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-47lz2"] Jan 23 14:06:59 crc kubenswrapper[4775]: W0123 14:06:59.386282 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63ed1a97_c97e_40d0_afdf_260c475dc83f.slice/crio-90cb5916be883c63dde6196ad162c19199860a7014a304debd1893faed3e0073 WatchSource:0}: Error finding container 90cb5916be883c63dde6196ad162c19199860a7014a304debd1893faed3e0073: Status 404 returned error can't find the container with id 90cb5916be883c63dde6196ad162c19199860a7014a304debd1893faed3e0073 Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.410872 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access\") pod \"266e861e-ba27-43d0-adfd-79b593bdb663\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.410936 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir\") pod \"266e861e-ba27-43d0-adfd-79b593bdb663\" (UID: \"266e861e-ba27-43d0-adfd-79b593bdb663\") " Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.411420 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "266e861e-ba27-43d0-adfd-79b593bdb663" (UID: "266e861e-ba27-43d0-adfd-79b593bdb663"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.421191 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "266e861e-ba27-43d0-adfd-79b593bdb663" (UID: "266e861e-ba27-43d0-adfd-79b593bdb663"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.512524 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/266e861e-ba27-43d0-adfd-79b593bdb663-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.512559 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/266e861e-ba27-43d0-adfd-79b593bdb663-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.649610 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:06:59 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:06:59 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:06:59 crc kubenswrapper[4775]: healthz check failed Jan 23 14:06:59 crc kubenswrapper[4775]: I0123 14:06:59.649666 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.018751 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"266e861e-ba27-43d0-adfd-79b593bdb663","Type":"ContainerDied","Data":"b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a"} Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.018791 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b62f54a2023c4313813368c02113d16054bb482ab67e8cc33302ffa88d68ab0a" Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.018867 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.020853 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-47lz2" event={"ID":"63ed1a97-c97e-40d0-afdf-260c475dc83f","Type":"ContainerStarted","Data":"90cb5916be883c63dde6196ad162c19199860a7014a304debd1893faed3e0073"} Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.621102 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-mvqcg" Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.650907 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:07:00 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:07:00 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:07:00 crc kubenswrapper[4775]: healthz check failed Jan 23 14:07:00 crc kubenswrapper[4775]: I0123 14:07:00.650961 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:07:01 crc kubenswrapper[4775]: I0123 14:07:01.033205 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-47lz2" event={"ID":"63ed1a97-c97e-40d0-afdf-260c475dc83f","Type":"ContainerStarted","Data":"8bd9ffc421e594fa14511a7227054cc0cea122e754a97d6f09b8248a3fe1948a"} Jan 23 14:07:01 crc kubenswrapper[4775]: I0123 14:07:01.596377 4775 patch_prober.go:28] interesting pod/console-f9d7485db-fgb82 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Jan 23 14:07:01 crc kubenswrapper[4775]: I0123 14:07:01.596466 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-fgb82" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.28:8443/health\": dial tcp 10.217.0.28:8443: connect: connection refused" Jan 23 14:07:01 crc kubenswrapper[4775]: I0123 14:07:01.652093 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 14:07:01 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:07:01 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:07:01 crc kubenswrapper[4775]: healthz check failed Jan 23 14:07:01 crc kubenswrapper[4775]: I0123 14:07:01.652505 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:07:02 crc kubenswrapper[4775]: I0123 14:07:02.650389 4775 patch_prober.go:28] interesting pod/router-default-5444994796-nj2dd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: 
reason withheld Jan 23 14:07:02 crc kubenswrapper[4775]: [-]has-synced failed: reason withheld Jan 23 14:07:02 crc kubenswrapper[4775]: [+]process-running ok Jan 23 14:07:02 crc kubenswrapper[4775]: healthz check failed Jan 23 14:07:02 crc kubenswrapper[4775]: I0123 14:07:02.650463 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nj2dd" podUID="381c20f8-ed2d-4aa8-b99b-5d85a6eb5526" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 14:07:03 crc kubenswrapper[4775]: I0123 14:07:03.649829 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:07:03 crc kubenswrapper[4775]: I0123 14:07:03.653023 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nj2dd" Jan 23 14:07:06 crc kubenswrapper[4775]: I0123 14:07:06.612354 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"] Jan 23 14:07:06 crc kubenswrapper[4775]: I0123 14:07:06.612737 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" containerID="cri-o://f4b1eb7532640c0119fea3d1dd873eab326ab51390a8e59dcd343707c94098b9" gracePeriod=30 Jan 23 14:07:06 crc kubenswrapper[4775]: I0123 14:07:06.623738 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"] Jan 23 14:07:06 crc kubenswrapper[4775]: I0123 14:07:06.623983 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" containerID="cri-o://0180d579f234a3f26f7595abf341e660581404c07fa388dc580f716a183ffec5" gracePeriod=30 Jan 23 14:07:07 crc kubenswrapper[4775]: I0123 14:07:07.100775 4775 generic.go:334] "Generic (PLEG): container finished" podID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerID="f4b1eb7532640c0119fea3d1dd873eab326ab51390a8e59dcd343707c94098b9" exitCode=0 Jan 23 14:07:07 crc kubenswrapper[4775]: I0123 14:07:07.100869 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" event={"ID":"4a8470cc-442d-4efc-91a2-af7e4fe75b3a","Type":"ContainerDied","Data":"f4b1eb7532640c0119fea3d1dd873eab326ab51390a8e59dcd343707c94098b9"} Jan 23 14:07:07 crc kubenswrapper[4775]: I0123 14:07:07.102609 4775 generic.go:334] "Generic (PLEG): container finished" podID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerID="0180d579f234a3f26f7595abf341e660581404c07fa388dc580f716a183ffec5" exitCode=0 Jan 23 14:07:07 crc kubenswrapper[4775]: I0123 14:07:07.102649 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" event={"ID":"a9a77e3c-0e93-45f9-ab81-7dfbd2916588","Type":"ContainerDied","Data":"0180d579f234a3f26f7595abf341e660581404c07fa388dc580f716a183ffec5"} Jan 23 14:07:09 crc kubenswrapper[4775]: I0123 14:07:09.783909 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:07:10 crc kubenswrapper[4775]: 
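The router's startup probe output above uses the aggregated healthz convention: each named sub-check is reported as "[+]name ok" or "[-]name failed: reason withheld", and the endpoint returns HTTP 500 until every check passes, which is the "statuscode: 500" the kubelet records. A minimal sketch of that convention follows; it is illustrative only, not the router's actual implementation, and the check names and listen address are taken loosely from the log rather than from any real configuration.

package main

import (
	"fmt"
	"net/http"
)

// check pairs a sub-check name with a func returning nil when healthy.
type check struct {
	name string
	fn   func() error
}

// healthzHandler renders checks in the "[+]/[-]" format seen in the log and
// returns 500 while any check fails.
func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		body, failed := "", false
		for _, c := range checks {
			if err := c.fn(); err != nil {
				failed = true
				// Failure details are typically withheld from unauthenticated
				// callers, hence "reason withheld" in the probe output.
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			body += "healthz check failed\n"
		}
		fmt.Fprint(w, body)
	}
}

func main() {
	checks := []check{
		{"backend-http", func() error { return fmt.Errorf("backend not ready") }},
		{"has-synced", func() error { return fmt.Errorf("initial sync pending") }},
		{"process-running", func() error { return nil }},
	}
	http.HandleFunc("/healthz", healthzHandler(checks))
	http.ListenAndServe(":1936", nil) // listen address is hypothetical
}

Once the failing checks flip to "[+]", the handler returns 200 and the kubelet logs the "startup started" / "readiness ready" transitions seen at 14:07:03.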
Jan 23 14:07:10 crc kubenswrapper[4775]: I0123 14:07:10.138090 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fp8bb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" start-of-body=
Jan 23 14:07:10 crc kubenswrapper[4775]: I0123 14:07:10.138165 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused"
Jan 23 14:07:10 crc kubenswrapper[4775]: I0123 14:07:10.571624 4775 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lqcpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 23 14:07:10 crc kubenswrapper[4775]: I0123 14:07:10.571670 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 23 14:07:11 crc kubenswrapper[4775]: I0123 14:07:11.660773 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-fgb82"
Jan 23 14:07:11 crc kubenswrapper[4775]: I0123 14:07:11.669142 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-fgb82"
Jan 23 14:07:21 crc kubenswrapper[4775]: I0123 14:07:21.138386 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fp8bb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 23 14:07:21 crc kubenswrapper[4775]: I0123 14:07:21.139199 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:07:21 crc kubenswrapper[4775]: I0123 14:07:21.570319 4775 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lqcpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
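These readiness failures are the expected shadow of the "Killing container with a grace period ... gracePeriod=30" entries at 14:07:06: once the container's listener closes, probes fail fast with "connection refused", and later attempts time out while the pod object still exists. A minimal sketch of a server that cooperates with this flow follows; it is an assumption-laden illustration, not the controller-manager's actual shutdown path, and the port is copied from the log while TLS is omitted.

package main

import (
	"context"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8443"} // port from the log; TLS omitted in this sketch
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	go srv.ListenAndServe() // error handling elided for brevity

	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM)
	<-stop // the kubelet sends SIGTERM, then SIGKILL after the grace period

	// Shutdown closes the listener immediately, so subsequent probes get
	// "connection refused" (the 14:07:10 entries above), then drains
	// in-flight requests, staying inside the 30s grace period.
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	srv.Shutdown(ctx)
}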
Jan 23 14:07:21 crc kubenswrapper[4775]: I0123 14:07:21.570427 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:07:22 crc kubenswrapper[4775]: I0123 14:07:22.074579 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-lssd6"
Jan 23 14:07:23 crc kubenswrapper[4775]: I0123 14:07:23.218669 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:07:23 crc kubenswrapper[4775]: I0123 14:07:23.219062 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:07:25 crc kubenswrapper[4775]: I0123 14:07:25.596446 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 23 14:07:28 crc kubenswrapper[4775]: E0123 14:07:28.293616 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Jan 23 14:07:28 crc kubenswrapper[4775]: E0123 14:07:28.294117 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dhknv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pphm8_openshift-marketplace(1a627ae2-fe8d-403e-9d14-3c3ace588da5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
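The ErrImagePull above is followed a few seconds later in the log by ImagePullBackOff entries for the same pods: after a failed pull the kubelet does not retry immediately but backs off exponentially. A back-of-envelope sketch of such a schedule follows; the 10-second base and 5-minute cap match commonly cited kubelet defaults but should be treated as assumptions here, not as values read from this node's configuration.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Exponential backoff: delay doubles after each failed pull, capped.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	at := time.Duration(0)
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d at t+%v; on failure, next retry in %v\n",
			attempt, at, delay)
		at += delay
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Between retries the pod sits in ImagePullBackOff, which is exactly the "Back-off pulling image" errors recorded below at 14:07:34 and 14:07:38.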
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pphm8" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.122090 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 14:07:29 crc kubenswrapper[4775]: E0123 14:07:29.122407 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="266e861e-ba27-43d0-adfd-79b593bdb663" containerName="pruner" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.122428 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="266e861e-ba27-43d0-adfd-79b593bdb663" containerName="pruner" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.122564 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="266e861e-ba27-43d0-adfd-79b593bdb663" containerName="pruner" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.123058 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.125156 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.128827 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.129422 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.276899 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.277142 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.378585 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.378694 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.378771 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.397399 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:29 crc kubenswrapper[4775]: I0123 14:07:29.494131 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:31 crc kubenswrapper[4775]: I0123 14:07:31.137093 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fp8bb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:07:31 crc kubenswrapper[4775]: I0123 14:07:31.137168 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:07:31 crc kubenswrapper[4775]: I0123 14:07:31.571000 4775 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lqcpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:07:31 crc kubenswrapper[4775]: I0123 14:07:31.571436 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.126187 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.128180 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.131423 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.134450 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pphm8" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.326005 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.326061 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.326097 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.395814 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.395944 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mds26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-998gd_openshift-marketplace(a25e2625-85e2-4f61-a654-347c5d111fc2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.397304 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-998gd" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.427635 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.427676 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.427701 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.427733 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.427781 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.449779 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access\") pod \"installer-9-crc\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: I0123 14:07:34.463506 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.713180 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.713850 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phm66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-q6l68_openshift-marketplace(e59d5724-424f-4151-98a4-c2cfa3918ac0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:34 crc kubenswrapper[4775]: E0123 14:07:34.715173 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-q6l68" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" Jan 23 14:07:38 crc kubenswrapper[4775]: E0123 14:07:38.330851 4775 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-998gd" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" Jan 23 14:07:38 crc kubenswrapper[4775]: E0123 14:07:38.330851 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-q6l68" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" Jan 23 14:07:39 crc kubenswrapper[4775]: E0123 14:07:39.890516 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 14:07:39 crc kubenswrapper[4775]: E0123 14:07:39.890676 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w4wm7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hdhzj_openshift-marketplace(945aeb53-25e2-4666-8fbe-a12be2948454): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:39 crc kubenswrapper[4775]: E0123 14:07:39.891879 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hdhzj" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" Jan 23 14:07:41 crc kubenswrapper[4775]: I0123 14:07:41.137686 4775 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-fp8bb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:07:41 crc kubenswrapper[4775]: I0123 14:07:41.137754 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:07:41 crc kubenswrapper[4775]: I0123 14:07:41.571991 4775 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lqcpn container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:07:41 crc kubenswrapper[4775]: I0123 14:07:41.572071 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.119470 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hdhzj" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.250565 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.250738 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nj6gc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-stflq_openshift-marketplace(9f29362d-380a-46e7-b163-0ff42600d563): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.251905 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-stflq" podUID="9f29362d-380a-46e7-b163-0ff42600d563" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.254725 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.277267 4775 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.277267 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.290369 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"]
Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.290737 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.290748 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager"
Jan 23 14:07:44 crc kubenswrapper[4775]: E0123 14:07:44.290762 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.290768 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.290870 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" containerName="route-controller-manager"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.290915 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" containerName="controller-manager"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.291242 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298745 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles\") pod \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") "
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298775 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert\") pod \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") "
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298794 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca\") pod \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") "
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298849 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsv7w\" (UniqueName: \"kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w\") pod \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") "
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298869 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccjkp\" (UniqueName: \"kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp\") pod \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") "
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298914 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config\") pod \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") "
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298929 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config\") pod \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") "
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298955 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert\") pod \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\" (UID: \"4a8470cc-442d-4efc-91a2-af7e4fe75b3a\") "
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.298996 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca\") pod \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\" (UID: \"a9a77e3c-0e93-45f9-ab81-7dfbd2916588\") "
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.299091 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.299126 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.299150 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8kcw\" (UniqueName: \"kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.299212 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.302342 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca" (OuterVolumeSpecName: "client-ca") pod "a9a77e3c-0e93-45f9-ab81-7dfbd2916588" (UID: "a9a77e3c-0e93-45f9-ab81-7dfbd2916588"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.303245 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config" (OuterVolumeSpecName: "config") pod "4a8470cc-442d-4efc-91a2-af7e4fe75b3a" (UID: "4a8470cc-442d-4efc-91a2-af7e4fe75b3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.306862 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca" (OuterVolumeSpecName: "client-ca") pod "4a8470cc-442d-4efc-91a2-af7e4fe75b3a" (UID: "4a8470cc-442d-4efc-91a2-af7e4fe75b3a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.306979 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config" (OuterVolumeSpecName: "config") pod "a9a77e3c-0e93-45f9-ab81-7dfbd2916588" (UID: "a9a77e3c-0e93-45f9-ab81-7dfbd2916588"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.307833 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4a8470cc-442d-4efc-91a2-af7e4fe75b3a" (UID: "4a8470cc-442d-4efc-91a2-af7e4fe75b3a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.312067 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a9a77e3c-0e93-45f9-ab81-7dfbd2916588" (UID: "a9a77e3c-0e93-45f9-ab81-7dfbd2916588"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.313270 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4a8470cc-442d-4efc-91a2-af7e4fe75b3a" (UID: "4a8470cc-442d-4efc-91a2-af7e4fe75b3a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.319027 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp" (OuterVolumeSpecName: "kube-api-access-ccjkp") pod "4a8470cc-442d-4efc-91a2-af7e4fe75b3a" (UID: "4a8470cc-442d-4efc-91a2-af7e4fe75b3a"). InnerVolumeSpecName "kube-api-access-ccjkp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.320425 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w" (OuterVolumeSpecName: "kube-api-access-rsv7w") pod "a9a77e3c-0e93-45f9-ab81-7dfbd2916588" (UID: "a9a77e3c-0e93-45f9-ab81-7dfbd2916588"). InnerVolumeSpecName "kube-api-access-rsv7w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.331334 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"]
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.336660 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn" event={"ID":"a9a77e3c-0e93-45f9-ab81-7dfbd2916588","Type":"ContainerDied","Data":"126d7f9344248499833b2fa9bffa79374396f9b7ca1fc1c07f0f0a3674655194"}
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.336692 4775 scope.go:117] "RemoveContainer" containerID="0180d579f234a3f26f7595abf341e660581404c07fa388dc580f716a183ffec5"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.336781 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.342963 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb" event={"ID":"4a8470cc-442d-4efc-91a2-af7e4fe75b3a","Type":"ContainerDied","Data":"971aa15dc628c22efdf895129c598854abbaff49521d3e188678eecd5ae7782c"}
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.343066 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-fp8bb"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.363073 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"]
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.370468 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lqcpn"]
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.372753 4775 scope.go:117] "RemoveContainer" containerID="f4b1eb7532640c0119fea3d1dd873eab326ab51390a8e59dcd343707c94098b9"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.374727 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"]
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.379871 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-fp8bb"]
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.399834 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.399882 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8kcw\" (UniqueName: \"kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.399923 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401043 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401159 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401654 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401735 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401749 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401759 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-client-ca\") on node \"crc\" DevicePath \"\""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401767 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401775 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsv7w\" (UniqueName: \"kubernetes.io/projected/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-kube-api-access-rsv7w\") on node \"crc\" DevicePath \"\""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401784 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccjkp\" (UniqueName: \"kubernetes.io/projected/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-kube-api-access-ccjkp\") on node \"crc\" DevicePath \"\""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401793 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-config\") on node \"crc\" DevicePath \"\""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401820 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9a77e3c-0e93-45f9-ab81-7dfbd2916588-config\") on node \"crc\" DevicePath \"\""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.401828 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a8470cc-442d-4efc-91a2-af7e4fe75b3a-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.408324 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.415580 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.417147 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8kcw\" (UniqueName: \"kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw\") pod \"route-controller-manager-654598bdc5-jqdkp\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: W0123 14:07:44.430758 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb0d34b3f_ebda_4e48_82ec_36db9214c42a.slice/crio-50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37 WatchSource:0}: Error finding container 50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37: Status 404 returned error can't find the container with id 50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.613091 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"
Jan 23 14:07:44 crc kubenswrapper[4775]: I0123 14:07:44.666694 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 23 14:07:45 crc kubenswrapper[4775]: I0123 14:07:45.353135 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0d34b3f-ebda-4e48-82ec-36db9214c42a","Type":"ContainerStarted","Data":"50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37"}
Jan 23 14:07:45 crc kubenswrapper[4775]: I0123 14:07:45.722044 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a8470cc-442d-4efc-91a2-af7e4fe75b3a" path="/var/lib/kubelet/pods/4a8470cc-442d-4efc-91a2-af7e4fe75b3a/volumes"
Jan 23 14:07:45 crc kubenswrapper[4775]: I0123 14:07:45.723859 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a77e3c-0e93-45f9-ab81-7dfbd2916588" path="/var/lib/kubelet/pods/a9a77e3c-0e93-45f9-ab81-7dfbd2916588/volumes"
Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.976471 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"]
Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.983544 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.983949 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.984621 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.984980 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:07:46 crc kubenswrapper[4775]: I0123 14:07:46.986154 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:46.989061 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.011007 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.013762 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.035226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.035284 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.035474 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.035535 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf89t\" (UniqueName: \"kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.035616 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: E0123 14:07:47.129701 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 23 14:07:47 crc kubenswrapper[4775]: E0123 14:07:47.129882 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f2lfm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2q2jj_openshift-marketplace(8bb5169a-229e-4d38-beea-4783c11d0098): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:47 crc kubenswrapper[4775]: E0123 14:07:47.131102 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2q2jj" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.137418 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.138740 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca\") pod 
\"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.138863 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf89t\" (UniqueName: \"kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.138960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.139084 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.139113 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.140346 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.140645 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.147132 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 14:07:47.166704 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf89t\" (UniqueName: \"kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t\") pod \"controller-manager-7fc4d79794-zptsb\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:47 crc kubenswrapper[4775]: I0123 
14:07:47.327384 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:48 crc kubenswrapper[4775]: E0123 14:07:48.943884 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 23 14:07:48 crc kubenswrapper[4775]: E0123 14:07:48.944417 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2k5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-84gx7_openshift-marketplace(0e3253a9-fac0-401c-8e02-52758dbc40f3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:48 crc kubenswrapper[4775]: E0123 14:07:48.945670 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-84gx7" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.056221 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-84gx7" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.057246 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2q2jj" 
podUID="8bb5169a-229e-4d38-beea-4783c11d0098" Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.128890 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.129526 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnxtf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-285dn_openshift-marketplace(1b219edd-2ebd-4968-b427-ec555eade68c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.130726 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-285dn" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.304189 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:07:50 crc kubenswrapper[4775]: W0123 14:07:50.327577 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb514f53_7687_42b7_b6bb_edc7208361d6.slice/crio-d9fd91d6e90c91180c8d490f7128ec362afa9bb227a9ab898100a9fcd0fc4b47 WatchSource:0}: Error finding container d9fd91d6e90c91180c8d490f7128ec362afa9bb227a9ab898100a9fcd0fc4b47: Status 404 returned error can't find the container with id d9fd91d6e90c91180c8d490f7128ec362afa9bb227a9ab898100a9fcd0fc4b47 Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.395937 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0","Type":"ContainerStarted","Data":"f55aef3075bf3519bde57f36e8c03c9ec9ac3f4b76b1c0fb9bf763560e6b84f4"} Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.397127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" event={"ID":"db514f53-7687-42b7-b6bb-edc7208361d6","Type":"ContainerStarted","Data":"d9fd91d6e90c91180c8d490f7128ec362afa9bb227a9ab898100a9fcd0fc4b47"} Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.399427 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-47lz2" event={"ID":"63ed1a97-c97e-40d0-afdf-260c475dc83f","Type":"ContainerStarted","Data":"0436249a72238537f3e2c75557b89ddfb8ecc64c7946eccac2a4926110abd43e"} Jan 23 14:07:50 crc kubenswrapper[4775]: E0123 14:07:50.400877 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-285dn" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.616885 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-47lz2" podStartSLOduration=194.616863277 podStartE2EDuration="3m14.616863277s" podCreationTimestamp="2026-01-23 14:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:07:50.431970952 +0000 UTC m=+217.426799702" watchObservedRunningTime="2026-01-23 14:07:50.616863277 +0000 UTC m=+217.611692017" Jan 23 14:07:50 crc kubenswrapper[4775]: I0123 14:07:50.623299 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"] Jan 23 14:07:50 crc kubenswrapper[4775]: W0123 14:07:50.627994 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod303477e6_d4ac_4cbc_a088_3d7754129bd4.slice/crio-46aebd3620f7b7059c0f06be42afa1d095d92cafdab916c70358b05e83c2baba WatchSource:0}: Error finding container 46aebd3620f7b7059c0f06be42afa1d095d92cafdab916c70358b05e83c2baba: Status 404 returned error can't find the container with id 46aebd3620f7b7059c0f06be42afa1d095d92cafdab916c70358b05e83c2baba Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.408393 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" event={"ID":"db514f53-7687-42b7-b6bb-edc7208361d6","Type":"ContainerStarted","Data":"6749598a5345ffb0fda60f9291093153566d9479b12238d34684f41edb3fc062"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.408763 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.410281 4775 generic.go:334] "Generic (PLEG): container finished" podID="df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" containerID="8313d85e6f6770dd871c5a84a51890ea2ea183eff258a22019919e03772f0b12" exitCode=0 Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.410348 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0","Type":"ContainerDied","Data":"8313d85e6f6770dd871c5a84a51890ea2ea183eff258a22019919e03772f0b12"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.412006 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.417132 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0d34b3f-ebda-4e48-82ec-36db9214c42a","Type":"ContainerStarted","Data":"0c28974bf5aa3d2045f7f01151a0a690db3102172d533985bc3f349a477cc135"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.421387 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerStarted","Data":"2f1f5c6dce1daa303e2331c24327c21bb8a394fe4879f5fa44bbe92a333ebdca"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.428900 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" event={"ID":"303477e6-d4ac-4cbc-a088-3d7754129bd4","Type":"ContainerStarted","Data":"75677b9b3bc9dd548b6b712ffb579a2023be7d4e1472e7d29a9986a72dbb56cd"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.428979 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" event={"ID":"303477e6-d4ac-4cbc-a088-3d7754129bd4","Type":"ContainerStarted","Data":"46aebd3620f7b7059c0f06be42afa1d095d92cafdab916c70358b05e83c2baba"} Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.429007 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.437880 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.442653 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" podStartSLOduration=25.44263853 podStartE2EDuration="25.44263853s" podCreationTimestamp="2026-01-23 14:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:07:51.427172927 +0000 UTC m=+218.422001667" watchObservedRunningTime="2026-01-23 14:07:51.44263853 +0000 UTC m=+218.437467270" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.470330 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=17.470315294 podStartE2EDuration="17.470315294s" podCreationTimestamp="2026-01-23 14:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:07:51.469463017 +0000 UTC m=+218.464291797" watchObservedRunningTime="2026-01-23 14:07:51.470315294 +0000 UTC m=+218.465144034" Jan 23 14:07:51 crc kubenswrapper[4775]: I0123 14:07:51.502216 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" podStartSLOduration=25.50219874 
podStartE2EDuration="25.50219874s" podCreationTimestamp="2026-01-23 14:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:07:51.500184527 +0000 UTC m=+218.495013287" watchObservedRunningTime="2026-01-23 14:07:51.50219874 +0000 UTC m=+218.497027480" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.434497 4775 generic.go:334] "Generic (PLEG): container finished" podID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerID="2f1f5c6dce1daa303e2331c24327c21bb8a394fe4879f5fa44bbe92a333ebdca" exitCode=0 Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.434570 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerDied","Data":"2f1f5c6dce1daa303e2331c24327c21bb8a394fe4879f5fa44bbe92a333ebdca"} Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.435730 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerStarted","Data":"52d85f8b19526e62a15c2bbebc40ff3a5e40cac38ce5567549cca65b58a04c73"} Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.438310 4775 generic.go:334] "Generic (PLEG): container finished" podID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerID="cfd053c22baaf71bc6e6f5aaf2077bc268a3849c132a7cf71ad6b25d80b48bc6" exitCode=0 Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.438371 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerDied","Data":"cfd053c22baaf71bc6e6f5aaf2077bc268a3849c132a7cf71ad6b25d80b48bc6"} Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.459307 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pphm8" podStartSLOduration=2.054023057 podStartE2EDuration="1m4.459288375s" podCreationTimestamp="2026-01-23 14:06:48 +0000 UTC" firstStartedPulling="2026-01-23 14:06:49.620044681 +0000 UTC m=+156.614873422" lastFinishedPulling="2026-01-23 14:07:52.02531 +0000 UTC m=+219.020138740" observedRunningTime="2026-01-23 14:07:52.456286001 +0000 UTC m=+219.451114741" watchObservedRunningTime="2026-01-23 14:07:52.459288375 +0000 UTC m=+219.454117115" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.717617 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.826258 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access\") pod \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.826303 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir\") pod \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\" (UID: \"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0\") " Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.826484 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" (UID: "df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.827008 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.833343 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" (UID: "df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:07:52 crc kubenswrapper[4775]: I0123 14:07:52.928485 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.219164 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.219229 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.219273 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.219851 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.219954 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d" gracePeriod=600 Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.445633 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.445657 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0","Type":"ContainerDied","Data":"f55aef3075bf3519bde57f36e8c03c9ec9ac3f4b76b1c0fb9bf763560e6b84f4"} Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.446136 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f55aef3075bf3519bde57f36e8c03c9ec9ac3f4b76b1c0fb9bf763560e6b84f4" Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.447908 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d" exitCode=0 Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.447991 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d"} Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.450722 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerStarted","Data":"706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f"} Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.453366 4775 generic.go:334] "Generic (PLEG): container finished" podID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerID="6e7e07e4a43f64752c0a8abac539b9d82b36fb5b5bf92042844ccd65a180b0bd" exitCode=0 Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.455148 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerDied","Data":"6e7e07e4a43f64752c0a8abac539b9d82b36fb5b5bf92042844ccd65a180b0bd"} Jan 23 14:07:53 crc kubenswrapper[4775]: I0123 14:07:53.480909 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q6l68" podStartSLOduration=2.370418032 podStartE2EDuration="1m3.480882785s" podCreationTimestamp="2026-01-23 14:06:50 +0000 UTC" firstStartedPulling="2026-01-23 14:06:51.805542788 +0000 UTC m=+158.800371528" lastFinishedPulling="2026-01-23 14:07:52.916007541 +0000 UTC m=+219.910836281" observedRunningTime="2026-01-23 14:07:53.472144282 +0000 UTC m=+220.466973032" watchObservedRunningTime="2026-01-23 14:07:53.480882785 +0000 UTC m=+220.475711535" Jan 23 14:07:54 crc kubenswrapper[4775]: I0123 14:07:54.460199 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b"} Jan 23 14:07:54 crc kubenswrapper[4775]: I0123 14:07:54.461817 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerStarted","Data":"b7027212b3bccd48b41a4c7b4324ffd0070d6284de5b7bb9bd87ab4379a0817e"} Jan 23 14:07:55 crc kubenswrapper[4775]: I0123 14:07:55.501494 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-998gd" podStartSLOduration=4.395167005 podStartE2EDuration="1m5.501467678s" podCreationTimestamp="2026-01-23 14:06:50 +0000 UTC" firstStartedPulling="2026-01-23 14:06:52.833262909 +0000 UTC m=+159.828091649" lastFinishedPulling="2026-01-23 14:07:53.939563582 +0000 UTC m=+220.934392322" observedRunningTime="2026-01-23 14:07:55.495838542 +0000 UTC m=+222.490667292" watchObservedRunningTime="2026-01-23 14:07:55.501467678 +0000 UTC m=+222.496296458" Jan 23 14:07:58 crc kubenswrapper[4775]: I0123 14:07:58.636790 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:07:58 crc kubenswrapper[4775]: I0123 14:07:58.637578 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.242420 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.308025 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.450538 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.450599 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.487429 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.519936 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerStarted","Data":"183673291a9648779d425ebe1de476acbe41025abfa9eb2361ef3769370abcf7"} Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.522516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerStarted","Data":"2e0a1b0a4d9848670d528c2dac734ab723eb0475190c6d5a98e31225e9651f6d"} Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.565501 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.617416 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.846544 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.846581 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:00 crc kubenswrapper[4775]: I0123 14:08:00.900905 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.529259 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="9f29362d-380a-46e7-b163-0ff42600d563" containerID="183673291a9648779d425ebe1de476acbe41025abfa9eb2361ef3769370abcf7" exitCode=0 Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.529456 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerDied","Data":"183673291a9648779d425ebe1de476acbe41025abfa9eb2361ef3769370abcf7"} Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.533085 4775 generic.go:334] "Generic (PLEG): container finished" podID="945aeb53-25e2-4666-8fbe-a12be2948454" containerID="2e0a1b0a4d9848670d528c2dac734ab723eb0475190c6d5a98e31225e9651f6d" exitCode=0 Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.534246 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerDied","Data":"2e0a1b0a4d9848670d528c2dac734ab723eb0475190c6d5a98e31225e9651f6d"} Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.535377 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pphm8" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="registry-server" containerID="cri-o://52d85f8b19526e62a15c2bbebc40ff3a5e40cac38ce5567549cca65b58a04c73" gracePeriod=2 Jan 23 14:08:01 crc kubenswrapper[4775]: I0123 14:08:01.962355 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:02 crc kubenswrapper[4775]: I0123 14:08:02.546205 4775 generic.go:334] "Generic (PLEG): container finished" podID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerID="52d85f8b19526e62a15c2bbebc40ff3a5e40cac38ce5567549cca65b58a04c73" exitCode=0 Jan 23 14:08:02 crc kubenswrapper[4775]: I0123 14:08:02.546351 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerDied","Data":"52d85f8b19526e62a15c2bbebc40ff3a5e40cac38ce5567549cca65b58a04c73"} Jan 23 14:08:02 crc kubenswrapper[4775]: I0123 14:08:02.820063 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.554636 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-998gd" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="registry-server" containerID="cri-o://b7027212b3bccd48b41a4c7b4324ffd0070d6284de5b7bb9bd87ab4379a0817e" gracePeriod=2 Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.859029 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.985786 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities\") pod \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.986037 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content\") pod \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.986173 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhknv\" (UniqueName: \"kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv\") pod \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\" (UID: \"1a627ae2-fe8d-403e-9d14-3c3ace588da5\") " Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.986664 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities" (OuterVolumeSpecName: "utilities") pod "1a627ae2-fe8d-403e-9d14-3c3ace588da5" (UID: "1a627ae2-fe8d-403e-9d14-3c3ace588da5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.986814 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:03 crc kubenswrapper[4775]: I0123 14:08:03.995074 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv" (OuterVolumeSpecName: "kube-api-access-dhknv") pod "1a627ae2-fe8d-403e-9d14-3c3ace588da5" (UID: "1a627ae2-fe8d-403e-9d14-3c3ace588da5"). InnerVolumeSpecName "kube-api-access-dhknv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.087861 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhknv\" (UniqueName: \"kubernetes.io/projected/1a627ae2-fe8d-403e-9d14-3c3ace588da5-kube-api-access-dhknv\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.211044 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a627ae2-fe8d-403e-9d14-3c3ace588da5" (UID: "1a627ae2-fe8d-403e-9d14-3c3ace588da5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.290317 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a627ae2-fe8d-403e-9d14-3c3ace588da5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.561496 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pphm8" event={"ID":"1a627ae2-fe8d-403e-9d14-3c3ace588da5","Type":"ContainerDied","Data":"0b453500d83d6bbbd03aaa519b618891a6bceb9a87ed025821643578d93cd618"} Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.561515 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pphm8" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.561837 4775 scope.go:117] "RemoveContainer" containerID="52d85f8b19526e62a15c2bbebc40ff3a5e40cac38ce5567549cca65b58a04c73" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.570138 4775 generic.go:334] "Generic (PLEG): container finished" podID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerID="b7027212b3bccd48b41a4c7b4324ffd0070d6284de5b7bb9bd87ab4379a0817e" exitCode=0 Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.570232 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerDied","Data":"b7027212b3bccd48b41a4c7b4324ffd0070d6284de5b7bb9bd87ab4379a0817e"} Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.579327 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerStarted","Data":"cc1b22943c56dbb624adaa13d3deaf2266f850e92f931c164c7c7ecc34724e35"} Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.582641 4775 scope.go:117] "RemoveContainer" containerID="2f1f5c6dce1daa303e2331c24327c21bb8a394fe4879f5fa44bbe92a333ebdca" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.590821 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.592642 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pphm8"] Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.613554 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hdhzj" podStartSLOduration=3.314178208 podStartE2EDuration="1m16.613535306s" podCreationTimestamp="2026-01-23 14:06:48 +0000 UTC" firstStartedPulling="2026-01-23 14:06:50.676337505 +0000 UTC m=+157.671166255" lastFinishedPulling="2026-01-23 14:08:03.975694613 +0000 UTC m=+230.970523353" observedRunningTime="2026-01-23 14:08:04.612348709 +0000 UTC m=+231.607177479" watchObservedRunningTime="2026-01-23 14:08:04.613535306 +0000 UTC m=+231.608364046" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.622180 4775 scope.go:117] "RemoveContainer" containerID="cbed6950aa3965cd8bfc7aa378027bf0a2d1e04ccbea9bb4f1e5636ae166f729" Jan 23 14:08:04 crc kubenswrapper[4775]: I0123 14:08:04.803235 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.000005 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities\") pod \"a25e2625-85e2-4f61-a654-347c5d111fc2\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.000570 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mds26\" (UniqueName: \"kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26\") pod \"a25e2625-85e2-4f61-a654-347c5d111fc2\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.000706 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content\") pod \"a25e2625-85e2-4f61-a654-347c5d111fc2\" (UID: \"a25e2625-85e2-4f61-a654-347c5d111fc2\") " Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.003186 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities" (OuterVolumeSpecName: "utilities") pod "a25e2625-85e2-4f61-a654-347c5d111fc2" (UID: "a25e2625-85e2-4f61-a654-347c5d111fc2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.016455 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26" (OuterVolumeSpecName: "kube-api-access-mds26") pod "a25e2625-85e2-4f61-a654-347c5d111fc2" (UID: "a25e2625-85e2-4f61-a654-347c5d111fc2"). InnerVolumeSpecName "kube-api-access-mds26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.031410 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a25e2625-85e2-4f61-a654-347c5d111fc2" (UID: "a25e2625-85e2-4f61-a654-347c5d111fc2"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.101699 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.101745 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a25e2625-85e2-4f61-a654-347c5d111fc2-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.101764 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mds26\" (UniqueName: \"kubernetes.io/projected/a25e2625-85e2-4f61-a654-347c5d111fc2-kube-api-access-mds26\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.586817 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-998gd" event={"ID":"a25e2625-85e2-4f61-a654-347c5d111fc2","Type":"ContainerDied","Data":"3ac2cbde2ce107b51f2fd46e9adae179e9362f5a9c3e49977d3cabfab8d5c7a8"} Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.587090 4775 scope.go:117] "RemoveContainer" containerID="b7027212b3bccd48b41a4c7b4324ffd0070d6284de5b7bb9bd87ab4379a0817e" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.587208 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-998gd" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.595326 4775 generic.go:334] "Generic (PLEG): container finished" podID="8bb5169a-229e-4d38-beea-4783c11d0098" containerID="e563f1706af6b75f9ac6731329cafb2b41d302473241046df0512766a2019809" exitCode=0 Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.595397 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerDied","Data":"e563f1706af6b75f9ac6731329cafb2b41d302473241046df0512766a2019809"} Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.600977 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerStarted","Data":"cf3fc96af9965d666fc5525bdd18e99c724ac634a1b40cc9d717fc2172e97742"} Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.614444 4775 scope.go:117] "RemoveContainer" containerID="6e7e07e4a43f64752c0a8abac539b9d82b36fb5b5bf92042844ccd65a180b0bd" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.632564 4775 scope.go:117] "RemoveContainer" containerID="ca983591e9c5773d2d910396e97f6529e836009e39c2ca638887beada7a160d7" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.642194 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.646691 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-998gd"] Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.726542 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" path="/var/lib/kubelet/pods/1a627ae2-fe8d-403e-9d14-3c3ace588da5/volumes" Jan 23 14:08:05 crc kubenswrapper[4775]: I0123 14:08:05.727331 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" path="/var/lib/kubelet/pods/a25e2625-85e2-4f61-a654-347c5d111fc2/volumes" Jan 23 14:08:06 crc kubenswrapper[4775]: I0123 14:08:06.608368 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerStarted","Data":"c7260cd3d625fa792d5d94bcaae087826a69b9166dd1b6258fd35d2e1bd77b66"} Jan 23 14:08:06 crc kubenswrapper[4775]: I0123 14:08:06.612105 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerStarted","Data":"5650f2902470285f87f0519671b820000e9540073b92320e14586d65634addb8"} Jan 23 14:08:06 crc kubenswrapper[4775]: I0123 14:08:06.614350 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerStarted","Data":"b1229993babbc54c28d7f94650301e60c409ed8c65f3e43af5dfec3a30554ce5"} Jan 23 14:08:06 crc kubenswrapper[4775]: I0123 14:08:06.628580 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2q2jj" podStartSLOduration=2.842502438 podStartE2EDuration="1m19.628554986s" podCreationTimestamp="2026-01-23 14:06:47 +0000 UTC" firstStartedPulling="2026-01-23 14:06:49.636568005 +0000 UTC m=+156.631396745" lastFinishedPulling="2026-01-23 14:08:06.422620563 +0000 UTC m=+233.417449293" observedRunningTime="2026-01-23 14:08:06.626893684 +0000 UTC m=+233.621722424" watchObservedRunningTime="2026-01-23 14:08:06.628554986 +0000 UTC m=+233.623383726" Jan 23 14:08:06 crc kubenswrapper[4775]: I0123 14:08:06.629930 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-stflq" podStartSLOduration=5.057721722 podStartE2EDuration="1m15.629922938s" podCreationTimestamp="2026-01-23 14:06:51 +0000 UTC" firstStartedPulling="2026-01-23 14:06:53.855046152 +0000 UTC m=+160.849874892" lastFinishedPulling="2026-01-23 14:08:04.427247368 +0000 UTC m=+231.422076108" observedRunningTime="2026-01-23 14:08:05.65705204 +0000 UTC m=+232.651880790" watchObservedRunningTime="2026-01-23 14:08:06.629922938 +0000 UTC m=+233.624751678" Jan 23 14:08:07 crc kubenswrapper[4775]: I0123 14:08:07.632209 4775 generic.go:334] "Generic (PLEG): container finished" podID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerID="5650f2902470285f87f0519671b820000e9540073b92320e14586d65634addb8" exitCode=0 Jan 23 14:08:07 crc kubenswrapper[4775]: I0123 14:08:07.632247 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerDied","Data":"5650f2902470285f87f0519671b820000e9540073b92320e14586d65634addb8"} Jan 23 14:08:07 crc kubenswrapper[4775]: I0123 14:08:07.635709 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b219edd-2ebd-4968-b427-ec555eade68c" containerID="b1229993babbc54c28d7f94650301e60c409ed8c65f3e43af5dfec3a30554ce5" exitCode=0 Jan 23 14:08:07 crc kubenswrapper[4775]: I0123 14:08:07.635744 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerDied","Data":"b1229993babbc54c28d7f94650301e60c409ed8c65f3e43af5dfec3a30554ce5"} Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 
Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.269820 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2q2jj"
Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.643609 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerStarted","Data":"d42ef899e57f6183a5f1a3a8ba0663646429d61c6d74c35df738852826152a1c"}
Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.647378 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerStarted","Data":"5d5b3239c4354bbf8668793adb57fca35d10a6d969fbc9bd29c2463925617ab2"}
Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.677331 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-84gx7" podStartSLOduration=2.37447291 podStartE2EDuration="1m17.67731126s" podCreationTimestamp="2026-01-23 14:06:51 +0000 UTC" firstStartedPulling="2026-01-23 14:06:52.836708282 +0000 UTC m=+159.831537022" lastFinishedPulling="2026-01-23 14:08:08.139546612 +0000 UTC m=+235.134375372" observedRunningTime="2026-01-23 14:08:08.67699112 +0000 UTC m=+235.671819880" watchObservedRunningTime="2026-01-23 14:08:08.67731126 +0000 UTC m=+235.672140000"
Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.704268 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-285dn" podStartSLOduration=3.323133908 podStartE2EDuration="1m20.704248851s" podCreationTimestamp="2026-01-23 14:06:48 +0000 UTC" firstStartedPulling="2026-01-23 14:06:50.683962443 +0000 UTC m=+157.678791183" lastFinishedPulling="2026-01-23 14:08:08.065077386 +0000 UTC m=+235.059906126" observedRunningTime="2026-01-23 14:08:08.70261424 +0000 UTC m=+235.697442980" watchObservedRunningTime="2026-01-23 14:08:08.704248851 +0000 UTC m=+235.699077591"
Jan 23 14:08:08 crc kubenswrapper[4775]: I0123 14:08:08.780830 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4q8mj"]
Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.310976 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2q2jj" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="registry-server" probeResult="failure" output=<
Jan 23 14:08:09 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s
Jan 23 14:08:09 crc kubenswrapper[4775]: >
Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.483205 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.483335 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-285dn"
Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.548220 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hdhzj"
Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.548586 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hdhzj"
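
The probe output above ("timeout: failed to connect service \":50051\" within 1s") comes from the marketplace registry pods' startup probe, which appears to be an exec probe that must reach the registry-server's gRPC port 50051 within a one-second budget; its non-zero exit is what flips the SyncLoop probe status to "unhealthy". A rough Go equivalent of that connect-within-deadline check, for illustration only (the actual probe binary additionally speaks the gRPC health protocol):

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Same budget the log shows: reach :50051 within 1s.
    	conn, err := net.DialTimeout("tcp", "localhost:50051", time.Second)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s\n", ":50051")
    		os.Exit(1) // non-zero exit marks this probe attempt as failed
    	}
    	conn.Close()
    }
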
pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.590398 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:09 crc kubenswrapper[4775]: I0123 14:08:09.690018 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:10 crc kubenswrapper[4775]: I0123 14:08:10.539997 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-285dn" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="registry-server" probeResult="failure" output=< Jan 23 14:08:10 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Jan 23 14:08:10 crc kubenswrapper[4775]: > Jan 23 14:08:11 crc kubenswrapper[4775]: I0123 14:08:11.503848 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:08:11 crc kubenswrapper[4775]: I0123 14:08:11.503893 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:08:11 crc kubenswrapper[4775]: I0123 14:08:11.895432 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:11 crc kubenswrapper[4775]: I0123 14:08:11.895485 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:11 crc kubenswrapper[4775]: I0123 14:08:11.938186 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:12 crc kubenswrapper[4775]: I0123 14:08:12.547694 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-84gx7" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="registry-server" probeResult="failure" output=< Jan 23 14:08:12 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Jan 23 14:08:12 crc kubenswrapper[4775]: > Jan 23 14:08:12 crc kubenswrapper[4775]: I0123 14:08:12.713910 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:13 crc kubenswrapper[4775]: I0123 14:08:13.215139 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:08:13 crc kubenswrapper[4775]: I0123 14:08:13.215682 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hdhzj" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="registry-server" containerID="cri-o://cc1b22943c56dbb624adaa13d3deaf2266f850e92f931c164c7c7ecc34724e35" gracePeriod=2 Jan 23 14:08:15 crc kubenswrapper[4775]: I0123 14:08:15.012629 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:08:15 crc kubenswrapper[4775]: I0123 14:08:15.012858 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-stflq" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="registry-server" containerID="cri-o://cf3fc96af9965d666fc5525bdd18e99c724ac634a1b40cc9d717fc2172e97742" gracePeriod=2 Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.716673 4775 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-stflq_9f29362d-380a-46e7-b163-0ff42600d563/registry-server/0.log" Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.717994 4775 generic.go:334] "Generic (PLEG): container finished" podID="9f29362d-380a-46e7-b163-0ff42600d563" containerID="cf3fc96af9965d666fc5525bdd18e99c724ac634a1b40cc9d717fc2172e97742" exitCode=137 Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.720284 4775 generic.go:334] "Generic (PLEG): container finished" podID="945aeb53-25e2-4666-8fbe-a12be2948454" containerID="cc1b22943c56dbb624adaa13d3deaf2266f850e92f931c164c7c7ecc34724e35" exitCode=0 Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.726829 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerDied","Data":"cf3fc96af9965d666fc5525bdd18e99c724ac634a1b40cc9d717fc2172e97742"} Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.726870 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerDied","Data":"cc1b22943c56dbb624adaa13d3deaf2266f850e92f931c164c7c7ecc34724e35"} Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.839868 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.981411 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") pod \"945aeb53-25e2-4666-8fbe-a12be2948454\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.981476 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content\") pod \"945aeb53-25e2-4666-8fbe-a12be2948454\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.981538 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7\") pod \"945aeb53-25e2-4666-8fbe-a12be2948454\" (UID: \"945aeb53-25e2-4666-8fbe-a12be2948454\") " Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.982747 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities" (OuterVolumeSpecName: "utilities") pod "945aeb53-25e2-4666-8fbe-a12be2948454" (UID: "945aeb53-25e2-4666-8fbe-a12be2948454"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:17 crc kubenswrapper[4775]: I0123 14:08:17.993162 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7" (OuterVolumeSpecName: "kube-api-access-w4wm7") pod "945aeb53-25e2-4666-8fbe-a12be2948454" (UID: "945aeb53-25e2-4666-8fbe-a12be2948454"). InnerVolumeSpecName "kube-api-access-w4wm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.044710 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "945aeb53-25e2-4666-8fbe-a12be2948454" (UID: "945aeb53-25e2-4666-8fbe-a12be2948454"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.066508 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-stflq_9f29362d-380a-46e7-b163-0ff42600d563/registry-server/0.log" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.067164 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.084879 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj6gc\" (UniqueName: \"kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc\") pod \"9f29362d-380a-46e7-b163-0ff42600d563\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.084955 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities\") pod \"9f29362d-380a-46e7-b163-0ff42600d563\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.084974 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content\") pod \"9f29362d-380a-46e7-b163-0ff42600d563\" (UID: \"9f29362d-380a-46e7-b163-0ff42600d563\") " Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.085139 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.085151 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/945aeb53-25e2-4666-8fbe-a12be2948454-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.085161 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4wm7\" (UniqueName: \"kubernetes.io/projected/945aeb53-25e2-4666-8fbe-a12be2948454-kube-api-access-w4wm7\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.086538 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities" (OuterVolumeSpecName: "utilities") pod "9f29362d-380a-46e7-b163-0ff42600d563" (UID: "9f29362d-380a-46e7-b163-0ff42600d563"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.120640 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc" (OuterVolumeSpecName: "kube-api-access-nj6gc") pod "9f29362d-380a-46e7-b163-0ff42600d563" (UID: "9f29362d-380a-46e7-b163-0ff42600d563"). InnerVolumeSpecName "kube-api-access-nj6gc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.185932 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj6gc\" (UniqueName: \"kubernetes.io/projected/9f29362d-380a-46e7-b163-0ff42600d563-kube-api-access-nj6gc\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.186242 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.197289 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f29362d-380a-46e7-b163-0ff42600d563" (UID: "9f29362d-380a-46e7-b163-0ff42600d563"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.287350 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f29362d-380a-46e7-b163-0ff42600d563-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.336279 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.390304 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.728169 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hdhzj" event={"ID":"945aeb53-25e2-4666-8fbe-a12be2948454","Type":"ContainerDied","Data":"0b8e8f2a3112c9f0a5edf42bad4d4c0988004cce6f56bf24b39ad208c83c6912"} Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.728703 4775 scope.go:117] "RemoveContainer" containerID="cc1b22943c56dbb624adaa13d3deaf2266f850e92f931c164c7c7ecc34724e35" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.728574 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hdhzj" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.730963 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-stflq_9f29362d-380a-46e7-b163-0ff42600d563/registry-server/0.log" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.732858 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-stflq" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.732922 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-stflq" event={"ID":"9f29362d-380a-46e7-b163-0ff42600d563","Type":"ContainerDied","Data":"50edf2899c3c4bd4f94febab7dade88c7fd87dc6b2dfbbaffdba8627cd2c9677"} Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.744424 4775 scope.go:117] "RemoveContainer" containerID="2e0a1b0a4d9848670d528c2dac734ab723eb0475190c6d5a98e31225e9651f6d" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.760423 4775 scope.go:117] "RemoveContainer" containerID="6872f50c5369e996aaf9998a59794f18e488c47ef49db5d73fa140ee26fe751a" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.768340 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.771021 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-stflq"] Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.786186 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.786296 4775 scope.go:117] "RemoveContainer" containerID="cf3fc96af9965d666fc5525bdd18e99c724ac634a1b40cc9d717fc2172e97742" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.791083 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hdhzj"] Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.806651 4775 scope.go:117] "RemoveContainer" containerID="183673291a9648779d425ebe1de476acbe41025abfa9eb2361ef3769370abcf7" Jan 23 14:08:18 crc kubenswrapper[4775]: I0123 14:08:18.827610 4775 scope.go:117] "RemoveContainer" containerID="8cf1d207d3c181ec1fe849262ab8dacc707e0308d2b5ce3e6df1a12ceacccc47" Jan 23 14:08:19 crc kubenswrapper[4775]: I0123 14:08:19.552655 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:08:19 crc kubenswrapper[4775]: I0123 14:08:19.593833 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:08:19 crc kubenswrapper[4775]: I0123 14:08:19.721230 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" path="/var/lib/kubelet/pods/945aeb53-25e2-4666-8fbe-a12be2948454/volumes" Jan 23 14:08:19 crc kubenswrapper[4775]: I0123 14:08:19.721793 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f29362d-380a-46e7-b163-0ff42600d563" path="/var/lib/kubelet/pods/9f29362d-380a-46e7-b163-0ff42600d563/volumes" Jan 23 14:08:21 crc kubenswrapper[4775]: I0123 14:08:21.543597 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:08:21 crc kubenswrapper[4775]: I0123 14:08:21.581763 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.545075 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.545614 4775 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" containerName="controller-manager" containerID="cri-o://6749598a5345ffb0fda60f9291093153566d9479b12238d34684f41edb3fc062" gracePeriod=30 Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.650381 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"] Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.650711 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" podUID="303477e6-d4ac-4cbc-a088-3d7754129bd4" containerName="route-controller-manager" containerID="cri-o://75677b9b3bc9dd548b6b712ffb579a2023be7d4e1472e7d29a9986a72dbb56cd" gracePeriod=30 Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.777794 4775 generic.go:334] "Generic (PLEG): container finished" podID="303477e6-d4ac-4cbc-a088-3d7754129bd4" containerID="75677b9b3bc9dd548b6b712ffb579a2023be7d4e1472e7d29a9986a72dbb56cd" exitCode=0 Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.778009 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" event={"ID":"303477e6-d4ac-4cbc-a088-3d7754129bd4","Type":"ContainerDied","Data":"75677b9b3bc9dd548b6b712ffb579a2023be7d4e1472e7d29a9986a72dbb56cd"} Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.780166 4775 generic.go:334] "Generic (PLEG): container finished" podID="db514f53-7687-42b7-b6bb-edc7208361d6" containerID="6749598a5345ffb0fda60f9291093153566d9479b12238d34684f41edb3fc062" exitCode=0 Jan 23 14:08:26 crc kubenswrapper[4775]: I0123 14:08:26.780218 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" event={"ID":"db514f53-7687-42b7-b6bb-edc7208361d6","Type":"ContainerDied","Data":"6749598a5345ffb0fda60f9291093153566d9479b12238d34684f41edb3fc062"} Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.093889 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.124586 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert\") pod \"303477e6-d4ac-4cbc-a088-3d7754129bd4\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.124732 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca\") pod \"303477e6-d4ac-4cbc-a088-3d7754129bd4\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.124920 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8kcw\" (UniqueName: \"kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw\") pod \"303477e6-d4ac-4cbc-a088-3d7754129bd4\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.124958 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config\") pod \"303477e6-d4ac-4cbc-a088-3d7754129bd4\" (UID: \"303477e6-d4ac-4cbc-a088-3d7754129bd4\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.126137 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca" (OuterVolumeSpecName: "client-ca") pod "303477e6-d4ac-4cbc-a088-3d7754129bd4" (UID: "303477e6-d4ac-4cbc-a088-3d7754129bd4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.126244 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config" (OuterVolumeSpecName: "config") pod "303477e6-d4ac-4cbc-a088-3d7754129bd4" (UID: "303477e6-d4ac-4cbc-a088-3d7754129bd4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.131080 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw" (OuterVolumeSpecName: "kube-api-access-l8kcw") pod "303477e6-d4ac-4cbc-a088-3d7754129bd4" (UID: "303477e6-d4ac-4cbc-a088-3d7754129bd4"). InnerVolumeSpecName "kube-api-access-l8kcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.131236 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "303477e6-d4ac-4cbc-a088-3d7754129bd4" (UID: "303477e6-d4ac-4cbc-a088-3d7754129bd4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.226956 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/303477e6-d4ac-4cbc-a088-3d7754129bd4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.226989 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.226998 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303477e6-d4ac-4cbc-a088-3d7754129bd4-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.227007 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8kcw\" (UniqueName: \"kubernetes.io/projected/303477e6-d4ac-4cbc-a088-3d7754129bd4-kube-api-access-l8kcw\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.413339 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.431118 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca\") pod \"db514f53-7687-42b7-b6bb-edc7208361d6\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.431169 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert\") pod \"db514f53-7687-42b7-b6bb-edc7208361d6\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.431271 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf89t\" (UniqueName: \"kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t\") pod \"db514f53-7687-42b7-b6bb-edc7208361d6\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.431297 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config\") pod \"db514f53-7687-42b7-b6bb-edc7208361d6\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.431356 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles\") pod \"db514f53-7687-42b7-b6bb-edc7208361d6\" (UID: \"db514f53-7687-42b7-b6bb-edc7208361d6\") " Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.432251 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config" (OuterVolumeSpecName: "config") pod "db514f53-7687-42b7-b6bb-edc7208361d6" (UID: "db514f53-7687-42b7-b6bb-edc7208361d6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.432315 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "db514f53-7687-42b7-b6bb-edc7208361d6" (UID: "db514f53-7687-42b7-b6bb-edc7208361d6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.433015 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca" (OuterVolumeSpecName: "client-ca") pod "db514f53-7687-42b7-b6bb-edc7208361d6" (UID: "db514f53-7687-42b7-b6bb-edc7208361d6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.434930 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "db514f53-7687-42b7-b6bb-edc7208361d6" (UID: "db514f53-7687-42b7-b6bb-edc7208361d6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.439959 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t" (OuterVolumeSpecName: "kube-api-access-sf89t") pod "db514f53-7687-42b7-b6bb-edc7208361d6" (UID: "db514f53-7687-42b7-b6bb-edc7208361d6"). InnerVolumeSpecName "kube-api-access-sf89t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.532641 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.532673 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.532681 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db514f53-7687-42b7-b6bb-edc7208361d6-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.532692 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf89t\" (UniqueName: \"kubernetes.io/projected/db514f53-7687-42b7-b6bb-edc7208361d6-kube-api-access-sf89t\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.532702 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db514f53-7687-42b7-b6bb-edc7208361d6-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.785724 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" event={"ID":"303477e6-d4ac-4cbc-a088-3d7754129bd4","Type":"ContainerDied","Data":"46aebd3620f7b7059c0f06be42afa1d095d92cafdab916c70358b05e83c2baba"} Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 
14:08:27.785775 4775 scope.go:117] "RemoveContainer" containerID="75677b9b3bc9dd548b6b712ffb579a2023be7d4e1472e7d29a9986a72dbb56cd" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.785891 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.789770 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.789772 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" event={"ID":"db514f53-7687-42b7-b6bb-edc7208361d6","Type":"ContainerDied","Data":"d9fd91d6e90c91180c8d490f7128ec362afa9bb227a9ab898100a9fcd0fc4b47"} Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.809126 4775 scope.go:117] "RemoveContainer" containerID="6749598a5345ffb0fda60f9291093153566d9479b12238d34684f41edb3fc062" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.812918 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"] Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.815396 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-654598bdc5-jqdkp"] Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.823565 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.827507 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7fc4d79794-zptsb"] Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.995870 4775 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.996517 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca" gracePeriod=15 Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.996546 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c" gracePeriod=15 Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.996591 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185" gracePeriod=15 Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.996579 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690" 
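
The source="file" REMOVE record at 14:08:27.995870, unlike the source="api" pairs above it, marks kube-apiserver-crc as a static pod: it is defined by a manifest on disk rather than an API object, so a manifest change tears down the old incarnation (UID f4b27818..., 15s grace period) and admits a replacement; the status_manager record further down ("Pod was deleted and then recreated", oldPodUID/podUID) shows the same rollover from the status side. A minimal sketch of a manifest-directory watcher of that general shape, using fsnotify (illustrative only; the path is the conventional static-pod manifest dir, and the resync logic is an assumption, not the kubelet's implementation):

    package main

    import (
    	"log"

    	"github.com/fsnotify/fsnotify"
    )

    func main() {
    	// Watch the static-pod manifest directory; a change there is what
    	// surfaces as `SyncLoop REMOVE/ADD source="file"` in the log above.
    	w, err := fsnotify.NewWatcher()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer w.Close()
    	if err := w.Add("/etc/kubernetes/manifests"); err != nil {
    		log.Fatal(err)
    	}
    	for ev := range w.Events {
    		log.Printf("manifest event: %s %s -> resync static pods", ev.Op, ev.Name)
    	}
    }
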
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.996612 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792" gracePeriod=15
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998333 4775 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998582 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="extract-utilities"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998607 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="extract-utilities"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998617 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998627 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998640 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303477e6-d4ac-4cbc-a088-3d7754129bd4" containerName="route-controller-manager"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998650 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="303477e6-d4ac-4cbc-a088-3d7754129bd4" containerName="route-controller-manager"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998660 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="extract-content"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998667 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="extract-content"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998677 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998685 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998697 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="registry-server"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998704 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="registry-server"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998711 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="extract-utilities"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998719 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="extract-utilities"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998727 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="registry-server"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998734 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="registry-server"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998744 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" containerName="controller-manager"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998751 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" containerName="controller-manager"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998761 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998770 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998783 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="extract-utilities"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998791 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="extract-utilities"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998821 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="registry-server"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998830 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="registry-server"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998840 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998847 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998859 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="extract-content"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998867 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="extract-content"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998877 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="extract-content"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998885 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="extract-content"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998899 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="extract-utilities"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998909 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="extract-utilities"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998918 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998925 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998935 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998943 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998951 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" containerName="pruner"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998959 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" containerName="pruner"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998969 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="registry-server"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.998977 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="registry-server"
Jan 23 14:08:27 crc kubenswrapper[4775]: E0123 14:08:27.998992 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="extract-content"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999000 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="extract-content"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999102 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999111 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" containerName="controller-manager"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999120 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999129 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a627ae2-fe8d-403e-9d14-3c3ace588da5" containerName="registry-server"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999140 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25e2625-85e2-4f61-a654-347c5d111fc2" containerName="registry-server"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999150 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="303477e6-d4ac-4cbc-a088-3d7754129bd4" containerName="route-controller-manager"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999162 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="df7b9cd0-70a4-4d8b-ba6d-47096b2bb7a0" containerName="pruner"
Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999175 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="945aeb53-25e2-4666-8fbe-a12be2948454" containerName="registry-server"
containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999183 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999193 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f29362d-380a-46e7-b163-0ff42600d563" containerName="registry-server" Jan 23 14:08:27 crc kubenswrapper[4775]: I0123 14:08:27.999207 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:27.999217 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:27.999319 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:27.999329 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:27.999437 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.000353 4775 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.000811 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.005989 4775 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.012948 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"] Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.013917 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.016666 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.016946 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.016671 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.017675 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.018120 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.018364 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.025901 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.026776 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.036851 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.040570 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"] Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.043252 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.044117 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046099 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046124 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: 
\"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046157 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046204 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046236 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046263 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046291 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046412 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046441 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046464 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " 
pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046523 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046580 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046610 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046633 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.046676 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147342 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147389 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147416 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147447 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147470 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147544 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147618 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147570 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147596 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147713 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147694 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147743 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147772 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147819 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147844 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147866 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147873 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147883 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147943 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.147962 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.148055 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.148116 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.148195 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:28.148462 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:28.148534 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. No retries permitted until 2026-01-23 14:08:28.648516737 +0000 UTC m=+255.643345567 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:28.148828 4775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-544cdfc94f-mdfkq.188d616364dfaafd openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-544cdfc94f-mdfkq,UID:ff5caa98-bd54-485f-a11e-46a25c98f82f,APIVersion:v1,ResourceVersion:29848,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access-2wfb4\" : failed to fetch token: Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token\": dial tcp 38.102.83.177:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,LastTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.148933 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.149662 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.162642 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.328584 4775 patch_prober.go:28] interesting pod/controller-manager-7fc4d79794-zptsb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 
14:08:28.328658 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7fc4d79794-zptsb" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.655464 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:28.656291 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:28 crc kubenswrapper[4775]: E0123 14:08:28.656423 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. No retries permitted until 2026-01-23 14:08:29.656403601 +0000 UTC m=+256.651232341 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.797446 4775 generic.go:334] "Generic (PLEG): container finished" podID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" containerID="0c28974bf5aa3d2045f7f01151a0a690db3102172d533985bc3f349a477cc135" exitCode=0 Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.797555 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0d34b3f-ebda-4e48-82ec-36db9214c42a","Type":"ContainerDied","Data":"0c28974bf5aa3d2045f7f01151a0a690db3102172d533985bc3f349a477cc135"} Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.800386 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.801835 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.802450 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c" exitCode=0 Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.802481 
4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792" exitCode=0 Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.802491 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690" exitCode=0 Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.802501 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185" exitCode=2 Jan 23 14:08:28 crc kubenswrapper[4775]: I0123 14:08:28.802579 4775 scope.go:117] "RemoveContainer" containerID="f34355755723c61ad662e1eff002b3adf36a9346efc0025be36cbe1e13ae5eb2" Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148553 4775 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148608 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148662 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:29.648636816 +0000 UTC m=+256.643465556 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148706 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:29.648682137 +0000 UTC m=+256.643510898 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148727 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148861 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:29.648844703 +0000 UTC m=+256.643673523 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148739 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148776 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.148953 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:29.648941336 +0000 UTC m=+256.643770196 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.670960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.671022 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.671058 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.671112 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.671143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 
14:08:29.671784 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.671870 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. No retries permitted until 2026-01-23 14:08:31.671849239 +0000 UTC m=+258.666677969 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.722363 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303477e6-d4ac-4cbc-a088-3d7754129bd4" path="/var/lib/kubelet/pods/303477e6-d4ac-4cbc-a088-3d7754129bd4/volumes" Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.723388 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db514f53-7687-42b7-b6bb-edc7208361d6" path="/var/lib/kubelet/pods/db514f53-7687-42b7-b6bb-edc7208361d6/volumes" Jan 23 14:08:29 crc kubenswrapper[4775]: E0123 14:08:29.790149 4775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-544cdfc94f-mdfkq.188d616364dfaafd openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-544cdfc94f-mdfkq,UID:ff5caa98-bd54-485f-a11e-46a25c98f82f,APIVersion:v1,ResourceVersion:29848,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access-2wfb4\" : failed to fetch token: Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token\": dial tcp 38.102.83.177:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,LastTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 14:08:29 crc kubenswrapper[4775]: I0123 14:08:29.812999 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.152070 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync 
configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.152395 4775 projected.go:194] Error preparing data for projected volume kube-api-access-vr2rr for pod openshift-controller-manager/controller-manager-f759bc488-r96ss: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.152470 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:30.652451031 +0000 UTC m=+257.647279771 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vr2rr" (UniqueName: "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.224266 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.279956 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access\") pod \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.280056 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock\") pod \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.280119 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir\") pod \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\" (UID: \"b0d34b3f-ebda-4e48-82ec-36db9214c42a\") " Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.280624 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0d34b3f-ebda-4e48-82ec-36db9214c42a" (UID: "b0d34b3f-ebda-4e48-82ec-36db9214c42a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.280702 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock" (OuterVolumeSpecName: "var-lock") pod "b0d34b3f-ebda-4e48-82ec-36db9214c42a" (UID: "b0d34b3f-ebda-4e48-82ec-36db9214c42a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.293221 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0d34b3f-ebda-4e48-82ec-36db9214c42a" (UID: "b0d34b3f-ebda-4e48-82ec-36db9214c42a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.351667 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.352605 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381500 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381570 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381590 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381658 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381690 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381782 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381949 4775 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381961 4775 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381969 4775 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381977 4775 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381985 4775 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.381994 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0d34b3f-ebda-4e48-82ec-36db9214c42a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.672052 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.672785 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:31.672754892 +0000 UTC m=+258.667583672 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.672089 4775 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.672109 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.672127 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.673217 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:31.673198866 +0000 UTC m=+258.668027646 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.673370 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:31.67334492 +0000 UTC m=+258.668173700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.673408 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:31.673394352 +0000 UTC m=+258.668223132 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.686453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.794504 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.795096 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.795731 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.796124 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.796530 4775 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 
14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.796705 4775 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.797270 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="200ms" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.822436 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.823395 4775 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca" exitCode=0 Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.823475 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.823502 4775 scope.go:117] "RemoveContainer" containerID="0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.826075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b0d34b3f-ebda-4e48-82ec-36db9214c42a","Type":"ContainerDied","Data":"50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37"} Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.826122 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a3207c43535211cc781efbf364abe05d4043fb9f6a837131123ef8444aee37" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.826209 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.854211 4775 scope.go:117] "RemoveContainer" containerID="a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.876447 4775 scope.go:117] "RemoveContainer" containerID="84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.896908 4775 scope.go:117] "RemoveContainer" containerID="cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.919878 4775 scope.go:117] "RemoveContainer" containerID="11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.940850 4775 scope.go:117] "RemoveContainer" containerID="039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.961704 4775 scope.go:117] "RemoveContainer" containerID="0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.962209 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\": container with ID starting with 0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c not found: ID does not exist" containerID="0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.962255 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c"} err="failed to get container status \"0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\": rpc error: code = NotFound desc = could not find container \"0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c\": container with ID starting with 0b9ad0faeccae5891c2ba0c9677811a550a657e3363502e75f91d761a79d9c4c not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.962292 4775 scope.go:117] "RemoveContainer" containerID="a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.963239 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\": container with ID starting with a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792 not found: ID does not exist" containerID="a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.963267 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792"} err="failed to get container status \"a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\": rpc error: code = NotFound desc = could not find container \"a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792\": container with ID starting with a2ebe4084ba6bbbde4ff9e6f98ffb44b2e9d549ef04ba2bbb40f5fcdee2da792 not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.963281 4775 scope.go:117] "RemoveContainer" 
containerID="84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.964331 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\": container with ID starting with 84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690 not found: ID does not exist" containerID="84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.964356 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690"} err="failed to get container status \"84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\": rpc error: code = NotFound desc = could not find container \"84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690\": container with ID starting with 84b740dc491796432e9e44aad087fe3e60aa7fe6796c7bc2b91d34ddaa70a690 not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.964405 4775 scope.go:117] "RemoveContainer" containerID="cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.964914 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\": container with ID starting with cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185 not found: ID does not exist" containerID="cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.964937 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185"} err="failed to get container status \"cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\": rpc error: code = NotFound desc = could not find container \"cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185\": container with ID starting with cd92d28403ef36cb15270024d3445eadc6c0febbed5fac7be90146604b599185 not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.964952 4775 scope.go:117] "RemoveContainer" containerID="11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.965553 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\": container with ID starting with 11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca not found: ID does not exist" containerID="11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.965587 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca"} err="failed to get container status \"11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\": rpc error: code = NotFound desc = could not find container \"11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca\": container with ID starting with 
11d981f767ec99144f9ddd06bc492ddff4929a1d62bc2e7c9ba70f8a9764eaca not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.965604 4775 scope.go:117] "RemoveContainer" containerID="039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.965990 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\": container with ID starting with 039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53 not found: ID does not exist" containerID="039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53" Jan 23 14:08:30 crc kubenswrapper[4775]: I0123 14:08:30.966022 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53"} err="failed to get container status \"039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\": rpc error: code = NotFound desc = could not find container \"039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53\": container with ID starting with 039d33f8e53314af81c50197faf92d23d5ffcb0dfc8e766094f24143d573bc53 not found: ID does not exist" Jan 23 14:08:30 crc kubenswrapper[4775]: E0123 14:08:30.998514 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="400ms" Jan 23 14:08:31 crc kubenswrapper[4775]: E0123 14:08:31.399866 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="800ms" Jan 23 14:08:31 crc kubenswrapper[4775]: E0123 14:08:31.687938 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:31 crc kubenswrapper[4775]: I0123 14:08:31.699211 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:31 crc kubenswrapper[4775]: I0123 14:08:31.699258 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:31 crc kubenswrapper[4775]: I0123 14:08:31.699321 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:31 crc kubenswrapper[4775]: 
I0123 14:08:31.699357 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:31 crc kubenswrapper[4775]: I0123 14:08:31.699387 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:31 crc kubenswrapper[4775]: E0123 14:08:31.699645 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:31 crc kubenswrapper[4775]: E0123 14:08:31.699698 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. No retries permitted until 2026-01-23 14:08:35.699685348 +0000 UTC m=+262.694514088 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:31 crc kubenswrapper[4775]: I0123 14:08:31.719623 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.200758 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="1.6s" Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.688375 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.688435 4775 projected.go:194] Error preparing data for projected volume kube-api-access-vr2rr for pod openshift-controller-manager/controller-manager-f759bc488-r96ss: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.688547 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:33.688518725 +0000 UTC m=+260.683347495 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-vr2rr" (UniqueName: "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700110 4775 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700160 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700164 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700191 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:34.700171819 +0000 UTC m=+261.695000589 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700124 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700237 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:34.700217681 +0000 UTC m=+261.695046421 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700252 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:34.700246562 +0000 UTC m=+261.695075302 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:32 crc kubenswrapper[4775]: E0123 14:08:32.700266 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:34.700258282 +0000 UTC m=+261.695087022 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.029546 4775 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.029682 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041504 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041715 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041528 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.041743 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.042027 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041711 4775 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.042256 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041711 4775 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.042336 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.041531 4775 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.042376 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.042116 4775 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.047735 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.048424 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.049124 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.058359 4775 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.058982 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.718092 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.719095 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.724836 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.802500 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="3.2s" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.803023 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" containerName="oauth-openshift" containerID="cri-o://b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f" gracePeriod=15 Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.847719 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b"} Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.847884 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6f7183e40e07981a30306e2dbf34ad8d9e3471d4eb8c3a38fc292f3ddd0da04b"} Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.849201 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.849416 4775 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:08:33 crc kubenswrapper[4775]: I0123 14:08:33.849670 4775 status_manager.go:851] "Failed to get status for 
pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:33 crc kubenswrapper[4775]: W0123 14:08:33.885626 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:33 crc kubenswrapper[4775]: E0123 14:08:33.885684 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.038247 4775 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.038349 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.045104 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.045180 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.051097 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 
14:08:34.051281 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.054771 4775 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.054861 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.191391 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.192175 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.192514 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.194014 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.235225 4775 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.235277 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332039 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332130 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332164 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332203 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332233 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332264 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332295 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332339 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332372 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6js2\" (UniqueName: 
\"kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332412 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332436 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332490 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332537 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332575 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template\") pod \"3066d31d-92a4-45a7-b368-ba66d5689456\" (UID: \"3066d31d-92a4-45a7-b368-ba66d5689456\") " Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.332832 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.333022 4775 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3066d31d-92a4-45a7-b368-ba66d5689456-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.334216 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.334297 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.334525 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.334895 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.339488 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2" (OuterVolumeSpecName: "kube-api-access-p6js2") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "kube-api-access-p6js2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.339506 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.339957 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.340242 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.340421 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.340653 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.340888 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.341118 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.341245 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3066d31d-92a4-45a7-b368-ba66d5689456" (UID: "3066d31d-92a4-45a7-b368-ba66d5689456"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433877 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433923 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433938 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433953 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6js2\" (UniqueName: \"kubernetes.io/projected/3066d31d-92a4-45a7-b368-ba66d5689456-kube-api-access-p6js2\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433965 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433977 4775 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.433992 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434005 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434017 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434030 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434042 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434055 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.434068 4775 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3066d31d-92a4-45a7-b368-ba66d5689456-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:08:34 crc kubenswrapper[4775]: W0123 14:08:34.481976 4775 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.482042 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.727696 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.770664 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.770780 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.770864 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.770913 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.855732 4775 generic.go:334] "Generic (PLEG): container finished" podID="3066d31d-92a4-45a7-b368-ba66d5689456" containerID="b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f" exitCode=0 Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.856075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" event={"ID":"3066d31d-92a4-45a7-b368-ba66d5689456","Type":"ContainerDied","Data":"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f"} Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.856265 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" event={"ID":"3066d31d-92a4-45a7-b368-ba66d5689456","Type":"ContainerDied","Data":"74f4cd2270219100871d3310c76c771eee7c27cb5f3b7f3244692cc8ce1e0535"} Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.856477 4775 scope.go:117] "RemoveContainer" containerID="b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.856747 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.858229 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.858729 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.859533 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.884840 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.885612 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.886274 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.890516 4775 scope.go:117] "RemoveContainer" 
containerID="b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f" Jan 23 14:08:34 crc kubenswrapper[4775]: E0123 14:08:34.891257 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f\": container with ID starting with b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f not found: ID does not exist" containerID="b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f" Jan 23 14:08:34 crc kubenswrapper[4775]: I0123 14:08:34.891309 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f"} err="failed to get container status \"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f\": rpc error: code = NotFound desc = could not find container \"b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f\": container with ID starting with b55e2c335cddf1f1e9c9202e83c490ce85712c353fa0cf36a620dab99d97659f not found: ID does not exist" Jan 23 14:08:35 crc kubenswrapper[4775]: W0123 14:08:35.724882 4775 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.726468 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.728147 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.728208 4775 projected.go:194] Error preparing data for projected volume kube-api-access-vr2rr for pod openshift-controller-manager/controller-manager-f759bc488-r96ss: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.728322 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:37.728286132 +0000 UTC m=+264.723114912 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vr2rr" (UniqueName: "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.771889 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.771937 4775 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772006 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772095 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772036 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:39.772006124 +0000 UTC m=+266.766834904 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772199 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:39.772155119 +0000 UTC m=+266.766983919 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772257 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:39.772236562 +0000 UTC m=+266.767065392 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.772299 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:39.772285543 +0000 UTC m=+266.767114323 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:35 crc kubenswrapper[4775]: I0123 14:08:35.784644 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.785762 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:35 crc kubenswrapper[4775]: E0123 14:08:35.785956 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. No retries permitted until 2026-01-23 14:08:43.785925805 +0000 UTC m=+270.780754585 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: W0123 14:08:36.335257 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: E0123 14:08:36.335365 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:36 crc kubenswrapper[4775]: W0123 14:08:36.665949 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: E0123 14:08:36.666092 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:36 crc kubenswrapper[4775]: W0123 14:08:36.743883 4775 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: E0123 14:08:36.744045 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:36 crc kubenswrapper[4775]: W0123 14:08:36.789348 4775 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get 
"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: E0123 14:08:36.789655 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:36 crc kubenswrapper[4775]: W0123 14:08:36.858269 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:36 crc kubenswrapper[4775]: E0123 14:08:36.858389 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:37 crc kubenswrapper[4775]: E0123 14:08:37.004761 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="6.4s" Jan 23 14:08:37 crc kubenswrapper[4775]: W0123 14:08:37.255223 4775 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:37 crc kubenswrapper[4775]: E0123 14:08:37.255359 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:37 crc kubenswrapper[4775]: I0123 14:08:37.732978 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:38 crc kubenswrapper[4775]: E0123 14:08:38.734561 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:39 crc kubenswrapper[4775]: E0123 14:08:39.735150 4775 
projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:39 crc kubenswrapper[4775]: E0123 14:08:39.735195 4775 projected.go:194] Error preparing data for projected volume kube-api-access-vr2rr for pod openshift-controller-manager/controller-manager-f759bc488-r96ss: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:39 crc kubenswrapper[4775]: E0123 14:08:39.735285 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:43.735260871 +0000 UTC m=+270.730089611 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-vr2rr" (UniqueName: "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:39 crc kubenswrapper[4775]: E0123 14:08:39.790887 4775 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/events\": dial tcp 38.102.83.177:6443: connect: connection refused" event="&Event{ObjectMeta:{route-controller-manager-544cdfc94f-mdfkq.188d616364dfaafd openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-544cdfc94f-mdfkq,UID:ff5caa98-bd54-485f-a11e-46a25c98f82f,APIVersion:v1,ResourceVersion:29848,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access-2wfb4\" : failed to fetch token: Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token\": dial tcp 38.102.83.177:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,LastTimestamp:2026-01-23 14:08:28.148509437 +0000 UTC m=+255.143338177,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 14:08:39 crc kubenswrapper[4775]: I0123 14:08:39.859461 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:39 crc kubenswrapper[4775]: I0123 14:08:39.859781 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:39 crc kubenswrapper[4775]: I0123 14:08:39.860013 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:39 crc kubenswrapper[4775]: I0123 14:08:39.860171 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:40 crc kubenswrapper[4775]: W0123 14:08:40.344775 4775 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.344893 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dclient-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:40 crc kubenswrapper[4775]: W0123 14:08:40.601025 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-global-ca": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.601455 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-global-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-global-ca&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:40 crc kubenswrapper[4775]: W0123 14:08:40.761750 4775 reflector.go:561] object-"openshift-controller-manager"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.761933 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.859955 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860098 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:48.860066751 +0000 UTC m=+275.854895531 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860224 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860253 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860274 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:48.860261577 +0000 UTC m=+275.855090317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860296 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:48.860283217 +0000 UTC m=+275.855111987 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.860985 4775 secret.go:188] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.861039 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:48.86102671 +0000 UTC m=+275.855855450 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync secret cache: timed out waiting for the condition Jan 23 14:08:40 crc kubenswrapper[4775]: W0123 14:08:40.869749 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:40 crc kubenswrapper[4775]: E0123 14:08:40.869909 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:41 crc kubenswrapper[4775]: W0123 14:08:41.078758 4775 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:41 crc kubenswrapper[4775]: E0123 14:08:41.078896 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dopenshift-controller-manager-sa-dockercfg-msq4c&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:41 crc kubenswrapper[4775]: W0123 14:08:41.572297 4775 reflector.go:561] object-"openshift-controller-manager"/"config": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:41 crc kubenswrapper[4775]: E0123 14:08:41.572416 4775 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dconfig&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:41 crc kubenswrapper[4775]: W0123 14:08:41.852944 4775 reflector.go:561] object-"openshift-controller-manager"/"serving-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:41 crc kubenswrapper[4775]: E0123 14:08:41.853070 4775 reflector.go:158] 
"Unhandled Error" err="object-\"openshift-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0\": dial tcp 38.102.83.177:6443: connect: connection refused" logger="UnhandledError" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.713774 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.714768 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.715375 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.715744 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.736435 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.736486 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:42 crc kubenswrapper[4775]: E0123 14:08:42.737140 4775 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.737701 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.905989 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1189ac241392d12ae197de28172c1eb38c5e4b0c799568b801ea1bd502836315"} Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.908861 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.908909 4775 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212" exitCode=1 Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.908938 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212"} Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.909380 4775 scope.go:117] "RemoveContainer" containerID="0bba717426c4314a10133649bc790fcf0676931e6874382722627d4ed35fd212" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.909729 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.910221 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.910621 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:42 crc kubenswrapper[4775]: I0123 14:08:42.910887 4775 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: E0123 14:08:43.406205 4775 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.177:6443: connect: connection refused" interval="7s" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.717853 4775 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.718650 4775 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.719369 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.719843 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.720223 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.811616 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.811693 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:43 crc kubenswrapper[4775]: E0123 14:08:43.812507 4775 projected.go:194] Error preparing data for projected volume kube-api-access-2wfb4 for pod openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq: failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:43 crc kubenswrapper[4775]: E0123 14:08:43.812602 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4 podName:ff5caa98-bd54-485f-a11e-46a25c98f82f nodeName:}" failed. 
No retries permitted until 2026-01-23 14:08:59.812580097 +0000 UTC m=+286.807408877 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-2wfb4" (UniqueName: "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4") pod "route-controller-manager-544cdfc94f-mdfkq" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f") : failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/serviceaccounts/route-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.922960 4775 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="66779cb0bfcfce756fbae36ed1bca9e0efea301f100ab3fd85127a0ec86aa8d5" exitCode=0 Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.923071 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"66779cb0bfcfce756fbae36ed1bca9e0efea301f100ab3fd85127a0ec86aa8d5"} Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.924219 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.924261 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.924564 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.925260 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: E0123 14:08:43.925280 4775 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.926111 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.926610 4775 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: 
connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.927292 4775 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.928170 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.928262 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6b821d7f72df334480c68b0f88ce737d26860c6c898f513dd696f37f929188b3"} Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.929149 4775 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.929417 4775 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.929860 4775 status_manager.go:851] "Failed to get status for pod" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.930543 4775 status_manager.go:851] "Failed to get status for pod" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" pod="openshift-authentication/oauth-openshift-558db77b4-4q8mj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-4q8mj\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:43 crc kubenswrapper[4775]: I0123 14:08:43.930828 4775 status_manager.go:851] "Failed to get status for pod" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-f759bc488-r96ss\": dial tcp 38.102.83.177:6443: connect: connection refused" Jan 23 14:08:44 crc kubenswrapper[4775]: E0123 14:08:44.813641 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:44 crc kubenswrapper[4775]: I0123 14:08:44.945603 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"675bf5e9c2285f4e55481cd8056521b35be83ea39288e07aff2fd527fe10f7a1"} Jan 23 14:08:44 crc kubenswrapper[4775]: I0123 14:08:44.945666 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"586bdbeab6277db5db805c44f7257af5857c0dd7442ba238cc0d7d596fa68408"} Jan 23 14:08:44 crc kubenswrapper[4775]: I0123 14:08:44.945681 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"944376e72a42cc14e53fd0437f6c53ef4e33d4f1a54304b9cc93f1759403fb1d"} Jan 23 14:08:45 crc kubenswrapper[4775]: E0123 14:08:45.814257 4775 projected.go:288] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:45 crc kubenswrapper[4775]: E0123 14:08:45.814565 4775 projected.go:194] Error preparing data for projected volume kube-api-access-vr2rr for pod openshift-controller-manager/controller-manager-f759bc488-r96ss: [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:45 crc kubenswrapper[4775]: E0123 14:08:45.814877 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:08:53.814701668 +0000 UTC m=+280.809530418 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vr2rr" (UniqueName: "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : [failed to fetch token: Post "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": dial tcp 38.102.83.177:6443: connect: connection refused, failed to sync configmap cache: timed out waiting for the condition] Jan 23 14:08:45 crc kubenswrapper[4775]: I0123 14:08:45.953860 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a976a0c3ce6f4391d8547e6d1bc358159da2b23c5b78cfe8cc79035713150b99"} Jan 23 14:08:45 crc kubenswrapper[4775]: I0123 14:08:45.954863 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9154c049e861c18ffcf33cb9af0054b67e678116f8c03aa8cd12ae8d5332a838"} Jan 23 14:08:45 crc kubenswrapper[4775]: I0123 14:08:45.955029 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:45 crc kubenswrapper[4775]: I0123 14:08:45.954303 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:45 crc kubenswrapper[4775]: I0123 14:08:45.955231 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.020751 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.025964 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.738634 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.739063 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.744409 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:47 crc kubenswrapper[4775]: I0123 14:08:47.971146 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:08:48 crc kubenswrapper[4775]: I0123 14:08:48.847211 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:08:48 crc kubenswrapper[4775]: I0123 14:08:48.876504 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:48 crc 
kubenswrapper[4775]: I0123 14:08:48.876591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:48 crc kubenswrapper[4775]: I0123 14:08:48.876645 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:48 crc kubenswrapper[4775]: I0123 14:08:48.876737 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:48 crc kubenswrapper[4775]: I0123 14:08:48.886666 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.877056 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.877857 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:09:05.877796734 +0000 UTC m=+292.872625494 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.877935 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.877992 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:09:05.877978469 +0000 UTC m=+292.872807229 (durationBeforeRetry 16s). 
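The reflector lines each list a single object by name, which is why the request URLs carry fieldSelector=metadata.name%3D<object> and limit=500: the kubelet runs one list/watch per referenced configmap or secret rather than watching whole namespaces. A minimal client-go sketch of the equivalent list call (in-cluster config assumed; namespace and object name taken from the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Equivalent of GET /api/v1/namespaces/openshift-controller-manager/configmaps
        //   ?fieldSelector=metadata.name%3Dclient-ca&limit=500 from the reflector entries.
        cms, err := cs.CoreV1().ConfigMaps("openshift-controller-manager").List(context.TODO(),
            metav1.ListOptions{FieldSelector: "metadata.name=client-ca", Limit: 500})
        if err != nil {
            fmt.Println("list failed:", err)
            return
        }
        fmt.Println("items:", len(cms.Items))
    }
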
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.877987 4775 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:49 crc kubenswrapper[4775]: E0123 14:08:49.878092 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config podName:1d63e87d-00e8-4acc-a3b7-7464f0ec0c83 nodeName:}" failed. No retries permitted until 2026-01-23 14:09:05.878067672 +0000 UTC m=+292.872896412 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config") pod "controller-manager-f759bc488-r96ss" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83") : failed to sync configmap cache: timed out waiting for the condition Jan 23 14:08:50 crc kubenswrapper[4775]: I0123 14:08:50.274415 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:08:50 crc kubenswrapper[4775]: I0123 14:08:50.685305 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:08:50 crc kubenswrapper[4775]: I0123 14:08:50.968361 4775 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.009608 4775 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f69835f3-89b6-4006-ab11-dcff693b4116" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.407833 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.991227 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.991521 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.995300 4775 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f69835f3-89b6-4006-ab11-dcff693b4116" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.996029 4775 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://944376e72a42cc14e53fd0437f6c53ef4e33d4f1a54304b9cc93f1759403fb1d" Jan 23 14:08:51 crc kubenswrapper[4775]: I0123 14:08:51.996054 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:08:52 crc kubenswrapper[4775]: I0123 14:08:52.010890 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.000438 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.000511 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.005630 4775 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f69835f3-89b6-4006-ab11-dcff693b4116" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.348889 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.841764 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:53 crc kubenswrapper[4775]: I0123 14:08:53.877581 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:08:54 crc kubenswrapper[4775]: I0123 14:08:54.269304 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:08:59 crc kubenswrapper[4775]: I0123 14:08:59.826560 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:59 crc kubenswrapper[4775]: I0123 14:08:59.852197 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"route-controller-manager-544cdfc94f-mdfkq\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:08:59 crc kubenswrapper[4775]: I0123 14:08:59.884970 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:09:00 crc kubenswrapper[4775]: W0123 14:09:00.094937 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff5caa98_bd54_485f_a11e_46a25c98f82f.slice/crio-b411bfdbc4445453b98beff52c995d60e91303d316b63ebd7869ac7d9567858a WatchSource:0}: Error finding container b411bfdbc4445453b98beff52c995d60e91303d316b63ebd7869ac7d9567858a: Status 404 returned error can't find the container with id b411bfdbc4445453b98beff52c995d60e91303d316b63ebd7869ac7d9567858a Jan 23 14:09:00 crc kubenswrapper[4775]: I0123 14:09:00.616010 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 23 14:09:00 crc kubenswrapper[4775]: I0123 14:09:00.974502 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.056564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" event={"ID":"ff5caa98-bd54-485f-a11e-46a25c98f82f","Type":"ContainerStarted","Data":"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569"} Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.056621 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" event={"ID":"ff5caa98-bd54-485f-a11e-46a25c98f82f","Type":"ContainerStarted","Data":"b411bfdbc4445453b98beff52c995d60e91303d316b63ebd7869ac7d9567858a"} Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.056648 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.231665 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.581914 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.712165 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.746375 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 23 14:09:01 crc kubenswrapper[4775]: I0123 14:09:01.873792 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.011445 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.033376 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.056162 4775 patch_prober.go:28] interesting pod/route-controller-manager-544cdfc94f-mdfkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.056312 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.185960 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.256563 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.394465 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.481392 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.516686 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.747985 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 23 14:09:02 crc kubenswrapper[4775]: I0123 14:09:02.974360 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.061542 4775 patch_prober.go:28] interesting pod/route-controller-manager-544cdfc94f-mdfkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.061643 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.210997 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.409895 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.562025 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.720122 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 23 14:09:03 crc 
kubenswrapper[4775]: I0123 14:09:03.870010 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.870186 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 23 14:09:03 crc kubenswrapper[4775]: I0123 14:09:03.990386 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.060605 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.088779 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.101723 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.152148 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.221109 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.230511 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.255722 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.297510 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.381231 4775 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.402764 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.412253 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.414184 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.439675 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.469886 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.544463 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.678519 4775 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.694243 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.698073 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.709324 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.751595 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.779694 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.798125 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.808267 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.856369 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.924058 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.930726 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.971892 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 23 14:09:04 crc kubenswrapper[4775]: I0123 14:09:04.994988 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.031981 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.082998 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.110114 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.201314 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.218844 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.287947 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.302636 4775 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.311653 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.345912 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.419538 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.420300 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.476849 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.570674 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.607308 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.796434 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.812312 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.813069 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.831982 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.940042 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.940110 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.940162 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.941138 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.941384 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.942133 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"controller-manager-f759bc488-r96ss\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:05 crc kubenswrapper[4775]: I0123 14:09:05.992013 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.010626 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.074126 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.082321 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.101177 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.164165 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.197044 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.201670 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.244500 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.262877 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.317449 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.345355 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.366958 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.369540 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.412263 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.416827 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.457084 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.495167 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.524048 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.554735 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.609989 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.766852 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.772246 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.831225 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.848956 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 23 14:09:06 crc kubenswrapper[4775]: 
I0123 14:09:06.870411 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 23 14:09:06 crc kubenswrapper[4775]: I0123 14:09:06.970235 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.004856 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.064889 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.090258 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" event={"ID":"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83","Type":"ContainerStarted","Data":"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84"} Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.090306 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" event={"ID":"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83","Type":"ContainerStarted","Data":"51613da25ddd6eb38c4ee47a22e6af21766feb4517f1baec6950dd90deefa0e9"} Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.090537 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.096599 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.184987 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.199642 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.204868 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.299550 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.563624 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.826886 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.850142 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 23 14:09:07 crc kubenswrapper[4775]: I0123 14:09:07.862144 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.054603 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.126989 4775 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-console"/"service-ca" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.189591 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.194906 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.198595 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.362278 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.532343 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.588284 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.636768 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.711123 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.781047 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.904172 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.921400 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 23 14:09:08 crc kubenswrapper[4775]: I0123 14:09:08.922788 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.030491 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.066960 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.127678 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.158797 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.283217 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.304587 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.340558 4775 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.405612 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.489914 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.571601 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.588348 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.619659 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.664027 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.722757 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.784338 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.814364 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.826797 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.828379 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.834900 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.926383 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.940837 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.947426 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.948388 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 23 14:09:09 crc kubenswrapper[4775]: I0123 14:09:09.969924 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.141485 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.235469 4775 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"openshift-service-ca.crt" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.302029 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.318847 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.326391 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.435983 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.478496 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.607304 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.684623 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.701391 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.863112 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.885876 4775 patch_prober.go:28] interesting pod/route-controller-manager-544cdfc94f-mdfkq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.885936 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.914139 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 23 14:09:10 crc kubenswrapper[4775]: I0123 14:09:10.987449 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.011094 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.051228 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.240056 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 23 14:09:11 crc 
kubenswrapper[4775]: I0123 14:09:11.281917 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.370389 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.377961 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.383211 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.406111 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.436343 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.443343 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.644292 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.678873 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.748616 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.751851 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.752749 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.807785 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.836561 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.870061 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.941291 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 23 14:09:11 crc kubenswrapper[4775]: I0123 14:09:11.956475 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.170558 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.217236 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.368156 4775 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.419022 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.445956 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.446925 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.452458 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.536107 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.743203 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.811349 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 23 14:09:12 crc kubenswrapper[4775]: I0123 14:09:12.921142 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.126530 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.206760 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.233019 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.236615 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.277247 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.367413 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.468191 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.479011 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.487394 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.520018 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 23 14:09:13 crc kubenswrapper[4775]: 
I0123 14:09:13.523026 4775 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.546708 4775 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.698775 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.729976 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.751928 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.859996 4775 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.873321 4775 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.876643 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" podStartSLOduration=47.876558575 podStartE2EDuration="47.876558575s" podCreationTimestamp="2026-01-23 14:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:01.078106553 +0000 UTC m=+288.072935303" watchObservedRunningTime="2026-01-23 14:09:13.876558575 +0000 UTC m=+300.871387345" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.877346 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" podStartSLOduration=47.877333849 podStartE2EDuration="47.877333849s" podCreationTimestamp="2026-01-23 14:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:07.108377679 +0000 UTC m=+294.103206479" watchObservedRunningTime="2026-01-23 14:09:13.877333849 +0000 UTC m=+300.872162619" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881034 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-4q8mj","openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881108 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f866778cb-dv6wd","openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 14:09:13 crc kubenswrapper[4775]: E0123 14:09:13.881368 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" containerName="installer" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881395 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" containerName="installer" Jan 23 14:09:13 crc kubenswrapper[4775]: E0123 14:09:13.881412 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" containerName="oauth-openshift" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881426 4775 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3066d31d-92a4-45a7-b368-ba66d5689456" containerName="oauth-openshift" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881569 4775 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881595 4775 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="0977f59d-f8ab-406f-adf0-f3ac44424242" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881599 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" containerName="oauth-openshift" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.881618 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d34b3f-ebda-4e48-82ec-36db9214c42a" containerName="installer" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.882085 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq","openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.882362 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.885545 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.887076 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.887222 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.887447 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.887082 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.888151 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.888787 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.891152 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.891981 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.892535 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.892630 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.892650 4775 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.892764 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.894122 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.894543 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.905961 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.906467 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.908259 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.915250 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.952273 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.952242996 podStartE2EDuration="23.952242996s" podCreationTimestamp="2026-01-23 14:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:13.941396311 +0000 UTC m=+300.936225051" watchObservedRunningTime="2026-01-23 14:09:13.952242996 +0000 UTC m=+300.947071766" Jan 23 14:09:13 crc kubenswrapper[4775]: I0123 14:09:13.992292 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.033781 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042352 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042404 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-login\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042424 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042444 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9989b1f-b602-41d4-b2be-9db936737e34-audit-dir\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042461 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042481 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-error\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042498 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shjkq\" (UniqueName: \"kubernetes.io/projected/e9989b1f-b602-41d4-b2be-9db936737e34-kube-api-access-shjkq\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042705 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042792 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-session\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042836 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042886 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042908 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042927 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-audit-policies\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.042955 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144273 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144372 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144431 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-login\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144474 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " 
pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144514 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9989b1f-b602-41d4-b2be-9db936737e34-audit-dir\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144608 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144652 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-error\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144694 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shjkq\" (UniqueName: \"kubernetes.io/projected/e9989b1f-b602-41d4-b2be-9db936737e34-kube-api-access-shjkq\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144730 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144834 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-session\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144869 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144919 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " 
pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.145101 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.145143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-audit-policies\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.145512 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.145689 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.144684 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e9989b1f-b602-41d4-b2be-9db936737e34-audit-dir\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.146267 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.147690 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e9989b1f-b602-41d4-b2be-9db936737e34-audit-policies\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.150996 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.151104 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-session\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.151423 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.151428 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.152068 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.152497 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.153426 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-login\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.161010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/e9989b1f-b602-41d4-b2be-9db936737e34-v4-0-config-user-template-error\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.172793 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shjkq\" (UniqueName: \"kubernetes.io/projected/e9989b1f-b602-41d4-b2be-9db936737e34-kube-api-access-shjkq\") pod \"oauth-openshift-6f866778cb-dv6wd\" (UID: \"e9989b1f-b602-41d4-b2be-9db936737e34\") " pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.207965 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.513176 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.626277 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.654648 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f866778cb-dv6wd"] Jan 23 14:09:14 crc kubenswrapper[4775]: I0123 14:09:14.971158 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.113174 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.140748 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" event={"ID":"e9989b1f-b602-41d4-b2be-9db936737e34","Type":"ContainerStarted","Data":"48f29e602a852e0ea0d277991522d8fa604cb0c43d918086a467b47c29d09db7"} Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.364924 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.375360 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.431837 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.472725 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.724401 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3066d31d-92a4-45a7-b368-ba66d5689456" path="/var/lib/kubelet/pods/3066d31d-92a4-45a7-b368-ba66d5689456/volumes" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.776584 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 23 14:09:15 crc kubenswrapper[4775]: I0123 14:09:15.911079 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.047164 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.091898 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.121767 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.149792 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" 
event={"ID":"e9989b1f-b602-41d4-b2be-9db936737e34","Type":"ContainerStarted","Data":"8d30f5526d5ea1b5e6adb26cb88f4bfb1e261e90a48b788817ed1f9806d76525"} Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.150176 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.157533 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.186115 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f866778cb-dv6wd" podStartSLOduration=68.186084178 podStartE2EDuration="1m8.186084178s" podCreationTimestamp="2026-01-23 14:08:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:16.181412494 +0000 UTC m=+303.176241274" watchObservedRunningTime="2026-01-23 14:09:16.186084178 +0000 UTC m=+303.180912928" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.407997 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.460563 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.483965 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.545036 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.561437 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.595604 4775 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 23 14:09:16 crc kubenswrapper[4775]: I0123 14:09:16.623637 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 23 14:09:17 crc kubenswrapper[4775]: I0123 14:09:17.054707 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 23 14:09:24 crc kubenswrapper[4775]: I0123 14:09:24.960965 4775 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 14:09:24 crc kubenswrapper[4775]: I0123 14:09:24.962550 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b" gracePeriod=5 Jan 23 14:09:26 crc kubenswrapper[4775]: I0123 14:09:26.577997 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:09:26 crc kubenswrapper[4775]: I0123 14:09:26.578875 4775 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" containerName="controller-manager" containerID="cri-o://653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84" gracePeriod=30 Jan 23 14:09:26 crc kubenswrapper[4775]: I0123 14:09:26.668978 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"] Jan 23 14:09:26 crc kubenswrapper[4775]: I0123 14:09:26.669240 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" containerID="cri-o://43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569" gracePeriod=30 Jan 23 14:09:26 crc kubenswrapper[4775]: I0123 14:09:26.945650 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.010475 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019413 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca\") pod \"ff5caa98-bd54-485f-a11e-46a25c98f82f\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019480 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") pod \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019510 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") pod \"ff5caa98-bd54-485f-a11e-46a25c98f82f\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019534 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert\") pod \"ff5caa98-bd54-485f-a11e-46a25c98f82f\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019552 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config\") pod \"ff5caa98-bd54-485f-a11e-46a25c98f82f\" (UID: \"ff5caa98-bd54-485f-a11e-46a25c98f82f\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019568 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") pod \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019590 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") pod \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019606 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") pod \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.019625 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") pod \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\" (UID: \"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83\") " Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.020980 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config" (OuterVolumeSpecName: "config") pod "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.021789 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca" (OuterVolumeSpecName: "client-ca") pod "ff5caa98-bd54-485f-a11e-46a25c98f82f" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.021937 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.023086 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca" (OuterVolumeSpecName: "client-ca") pod "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.023964 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config" (OuterVolumeSpecName: "config") pod "ff5caa98-bd54-485f-a11e-46a25c98f82f" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.025776 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ff5caa98-bd54-485f-a11e-46a25c98f82f" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.025867 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4" (OuterVolumeSpecName: "kube-api-access-2wfb4") pod "ff5caa98-bd54-485f-a11e-46a25c98f82f" (UID: "ff5caa98-bd54-485f-a11e-46a25c98f82f"). InnerVolumeSpecName "kube-api-access-2wfb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.026040 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr" (OuterVolumeSpecName: "kube-api-access-vr2rr") pod "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83"). InnerVolumeSpecName "kube-api-access-vr2rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.026971 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" (UID: "1d63e87d-00e8-4acc-a3b7-7464f0ec0c83"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120721 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr2rr\" (UniqueName: \"kubernetes.io/projected/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-kube-api-access-vr2rr\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120749 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wfb4\" (UniqueName: \"kubernetes.io/projected/ff5caa98-bd54-485f-a11e-46a25c98f82f-kube-api-access-2wfb4\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120758 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff5caa98-bd54-485f-a11e-46a25c98f82f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120768 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120776 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120784 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120792 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.120816 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc 
kubenswrapper[4775]: I0123 14:09:27.120824 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ff5caa98-bd54-485f-a11e-46a25c98f82f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.231394 4775 generic.go:334] "Generic (PLEG): container finished" podID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" containerID="653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84" exitCode=0 Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.231508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" event={"ID":"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83","Type":"ContainerDied","Data":"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84"} Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.231558 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" event={"ID":"1d63e87d-00e8-4acc-a3b7-7464f0ec0c83","Type":"ContainerDied","Data":"51613da25ddd6eb38c4ee47a22e6af21766feb4517f1baec6950dd90deefa0e9"} Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.231597 4775 scope.go:117] "RemoveContainer" containerID="653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.231854 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f759bc488-r96ss" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.236243 4775 generic.go:334] "Generic (PLEG): container finished" podID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerID="43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569" exitCode=0 Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.236270 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" event={"ID":"ff5caa98-bd54-485f-a11e-46a25c98f82f","Type":"ContainerDied","Data":"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569"} Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.236285 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" event={"ID":"ff5caa98-bd54-485f-a11e-46a25c98f82f","Type":"ContainerDied","Data":"b411bfdbc4445453b98beff52c995d60e91303d316b63ebd7869ac7d9567858a"} Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.236316 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.264988 4775 scope.go:117] "RemoveContainer" containerID="653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84" Jan 23 14:09:27 crc kubenswrapper[4775]: E0123 14:09:27.266065 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84\": container with ID starting with 653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84 not found: ID does not exist" containerID="653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.266094 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84"} err="failed to get container status \"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84\": rpc error: code = NotFound desc = could not find container \"653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84\": container with ID starting with 653cbfc156c37e7a5562d09f0da4132ef85fffcbbfa7b0bb4dcb957ff881ec84 not found: ID does not exist" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.266112 4775 scope.go:117] "RemoveContainer" containerID="43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.270433 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"] Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.274784 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-544cdfc94f-mdfkq"] Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.284885 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.290566 4775 scope.go:117] "RemoveContainer" containerID="43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.292099 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f759bc488-r96ss"] Jan 23 14:09:27 crc kubenswrapper[4775]: E0123 14:09:27.292208 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569\": container with ID starting with 43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569 not found: ID does not exist" containerID="43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.292271 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569"} err="failed to get container status \"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569\": rpc error: code = NotFound desc = could not find container \"43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569\": container with ID starting with 43a6dcab62e1108a909f51abe63c62d16a838878cd7dadce64232f1868dbd569 not found: ID does not exist" Jan 23 
14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.719590 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" path="/var/lib/kubelet/pods/1d63e87d-00e8-4acc-a3b7-7464f0ec0c83/volumes" Jan 23 14:09:27 crc kubenswrapper[4775]: I0123 14:09:27.720384 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" path="/var/lib/kubelet/pods/ff5caa98-bd54-485f-a11e-46a25c98f82f/volumes" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053051 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"] Jan 23 14:09:28 crc kubenswrapper[4775]: E0123 14:09:28.053394 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053419 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: E0123 14:09:28.053432 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053442 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 14:09:28 crc kubenswrapper[4775]: E0123 14:09:28.053489 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" containerName="controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053499 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" containerName="controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053649 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5caa98-bd54-485f-a11e-46a25c98f82f" containerName="route-controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053663 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d63e87d-00e8-4acc-a3b7-7464f0ec0c83" containerName="controller-manager" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.053696 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.054048 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058029 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058413 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058586 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058601 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058621 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.058985 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.065161 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.078750 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"] Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.083099 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.086018 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.086788 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.088059 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.088208 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.090276 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.090695 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.091175 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"] Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.108515 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"] Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235385 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235458 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235495 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235542 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235567 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235600 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tckq9\" (UniqueName: \"kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235626 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvjw\" (UniqueName: \"kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.235679 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.302696 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.338180 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvjw\" (UniqueName: \"kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.338886 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.339130 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.339389 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.339830 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.339968 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.340758 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.340819 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.340938 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.340975 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.341030 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tckq9\" (UniqueName: \"kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.341678 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.342323 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.342775 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.345045 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.350949 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: 
\"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.357143 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvjw\" (UniqueName: \"kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw\") pod \"route-controller-manager-5b89f6874d-69gnd\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.365320 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tckq9\" (UniqueName: \"kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9\") pod \"controller-manager-785c4bb865-5xxrk\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.381422 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.412727 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.594737 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"] Jan 23 14:09:28 crc kubenswrapper[4775]: W0123 14:09:28.607932 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd0778a_d1f2_417f_acc4_e6cb92c96f45.slice/crio-9e9466cc4afe62937f0a277fd984621693d8d498e195874a416bc6b22d04e74c WatchSource:0}: Error finding container 9e9466cc4afe62937f0a277fd984621693d8d498e195874a416bc6b22d04e74c: Status 404 returned error can't find the container with id 9e9466cc4afe62937f0a277fd984621693d8d498e195874a416bc6b22d04e74c Jan 23 14:09:28 crc kubenswrapper[4775]: I0123 14:09:28.657166 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"] Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.250641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" event={"ID":"3d30dd02-24bf-444b-bf37-a01716591d49","Type":"ContainerStarted","Data":"07b5183f82d839051054341b96c5c2531a81bc3b09e02b8f914291fa8161b35b"} Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.251082 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" event={"ID":"3d30dd02-24bf-444b-bf37-a01716591d49","Type":"ContainerStarted","Data":"b2254ed186345bce63999097f48f317e358a300d29e3faa324494ca3b29d1a75"} Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.251125 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.252483 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" 
event={"ID":"2bd0778a-d1f2-417f-acc4-e6cb92c96f45","Type":"ContainerStarted","Data":"17b6896c4af87db6f9c0269c156ed1d9d3db521d11c0cb4f7f748ba38f2f8bb6"} Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.252524 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" event={"ID":"2bd0778a-d1f2-417f-acc4-e6cb92c96f45","Type":"ContainerStarted","Data":"9e9466cc4afe62937f0a277fd984621693d8d498e195874a416bc6b22d04e74c"} Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.252820 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.257342 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.277996 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" podStartSLOduration=3.277788993 podStartE2EDuration="3.277788993s" podCreationTimestamp="2026-01-23 14:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:29.272995045 +0000 UTC m=+316.267823785" watchObservedRunningTime="2026-01-23 14:09:29.277788993 +0000 UTC m=+316.272617753" Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.303935 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" podStartSLOduration=3.303919671 podStartE2EDuration="3.303919671s" podCreationTimestamp="2026-01-23 14:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:29.301915219 +0000 UTC m=+316.296743969" watchObservedRunningTime="2026-01-23 14:09:29.303919671 +0000 UTC m=+316.298748431" Jan 23 14:09:29 crc kubenswrapper[4775]: I0123 14:09:29.337498 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.119780 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.119886 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.261420 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.261519 4775 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b" exitCode=137 Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.261641 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.261773 4775 scope.go:117] "RemoveContainer" containerID="aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262281 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262388 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262792 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262889 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262930 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.262951 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263080 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263117 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263138 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263267 4775 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263445 4775 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263486 4775 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.263511 4775 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.276595 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.341211 4775 scope.go:117] "RemoveContainer" containerID="aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b" Jan 23 14:09:30 crc kubenswrapper[4775]: E0123 14:09:30.341732 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b\": container with ID starting with aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b not found: ID does not exist" containerID="aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.341793 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b"} err="failed to get container status \"aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b\": rpc error: code = NotFound desc = could not find container \"aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b\": container with ID starting with aff268ac61a1e94757e586a2d154e2ae45702e5030a24a5cd4532578fe0a281b not found: ID does not exist" Jan 23 14:09:30 crc kubenswrapper[4775]: I0123 14:09:30.364576 4775 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:31 crc kubenswrapper[4775]: I0123 14:09:31.720138 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 23 14:09:35 crc kubenswrapper[4775]: I0123 14:09:35.213123 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 23 14:09:46 crc kubenswrapper[4775]: I0123 14:09:46.562075 4775 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"] Jan 23 14:09:46 crc kubenswrapper[4775]: I0123 14:09:46.562582 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" podUID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" containerName="controller-manager" containerID="cri-o://17b6896c4af87db6f9c0269c156ed1d9d3db521d11c0cb4f7f748ba38f2f8bb6" gracePeriod=30 Jan 23 14:09:46 crc kubenswrapper[4775]: I0123 14:09:46.569735 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"] Jan 23 14:09:46 crc kubenswrapper[4775]: I0123 14:09:46.569948 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" podUID="3d30dd02-24bf-444b-bf37-a01716591d49" containerName="route-controller-manager" containerID="cri-o://07b5183f82d839051054341b96c5c2531a81bc3b09e02b8f914291fa8161b35b" gracePeriod=30 Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.393285 4775 generic.go:334] "Generic (PLEG): container finished" podID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" containerID="17b6896c4af87db6f9c0269c156ed1d9d3db521d11c0cb4f7f748ba38f2f8bb6" exitCode=0 Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.393656 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" event={"ID":"2bd0778a-d1f2-417f-acc4-e6cb92c96f45","Type":"ContainerDied","Data":"17b6896c4af87db6f9c0269c156ed1d9d3db521d11c0cb4f7f748ba38f2f8bb6"} Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.399091 4775 generic.go:334] "Generic (PLEG): container finished" podID="3d30dd02-24bf-444b-bf37-a01716591d49" containerID="07b5183f82d839051054341b96c5c2531a81bc3b09e02b8f914291fa8161b35b" exitCode=0 Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.399392 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" event={"ID":"3d30dd02-24bf-444b-bf37-a01716591d49","Type":"ContainerDied","Data":"07b5183f82d839051054341b96c5c2531a81bc3b09e02b8f914291fa8161b35b"} Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.603097 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.607450 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.627357 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"] Jan 23 14:09:47 crc kubenswrapper[4775]: E0123 14:09:47.627557 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" containerName="controller-manager" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.627568 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" containerName="controller-manager" Jan 23 14:09:47 crc kubenswrapper[4775]: E0123 14:09:47.627582 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d30dd02-24bf-444b-bf37-a01716591d49" containerName="route-controller-manager" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.627588 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d30dd02-24bf-444b-bf37-a01716591d49" containerName="route-controller-manager" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.627671 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d30dd02-24bf-444b-bf37-a01716591d49" containerName="route-controller-manager" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.627685 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" containerName="controller-manager" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.628050 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.636311 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"] Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801059 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert\") pod \"3d30dd02-24bf-444b-bf37-a01716591d49\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801131 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config\") pod \"3d30dd02-24bf-444b-bf37-a01716591d49\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801156 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles\") pod \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801190 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnvjw\" (UniqueName: \"kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw\") pod \"3d30dd02-24bf-444b-bf37-a01716591d49\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801219 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca\") pod \"3d30dd02-24bf-444b-bf37-a01716591d49\" (UID: \"3d30dd02-24bf-444b-bf37-a01716591d49\") " Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801242 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert\") pod \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801275 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca\") pod \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801314 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config\") pod \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801365 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tckq9\" (UniqueName: \"kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9\") pod \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\" (UID: \"2bd0778a-d1f2-417f-acc4-e6cb92c96f45\") " Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801503 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801531 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801554 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.801598 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dch9g\" (UniqueName: \"kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.802365 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca" (OuterVolumeSpecName: "client-ca") pod "3d30dd02-24bf-444b-bf37-a01716591d49" (UID: "3d30dd02-24bf-444b-bf37-a01716591d49"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.802387 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca" (OuterVolumeSpecName: "client-ca") pod "2bd0778a-d1f2-417f-acc4-e6cb92c96f45" (UID: "2bd0778a-d1f2-417f-acc4-e6cb92c96f45"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.802469 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config" (OuterVolumeSpecName: "config") pod "2bd0778a-d1f2-417f-acc4-e6cb92c96f45" (UID: "2bd0778a-d1f2-417f-acc4-e6cb92c96f45"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.802507 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2bd0778a-d1f2-417f-acc4-e6cb92c96f45" (UID: "2bd0778a-d1f2-417f-acc4-e6cb92c96f45"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.802567 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config" (OuterVolumeSpecName: "config") pod "3d30dd02-24bf-444b-bf37-a01716591d49" (UID: "3d30dd02-24bf-444b-bf37-a01716591d49"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.807634 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2bd0778a-d1f2-417f-acc4-e6cb92c96f45" (UID: "2bd0778a-d1f2-417f-acc4-e6cb92c96f45"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.808211 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3d30dd02-24bf-444b-bf37-a01716591d49" (UID: "3d30dd02-24bf-444b-bf37-a01716591d49"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.816951 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9" (OuterVolumeSpecName: "kube-api-access-tckq9") pod "2bd0778a-d1f2-417f-acc4-e6cb92c96f45" (UID: "2bd0778a-d1f2-417f-acc4-e6cb92c96f45"). InnerVolumeSpecName "kube-api-access-tckq9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.822023 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw" (OuterVolumeSpecName: "kube-api-access-jnvjw") pod "3d30dd02-24bf-444b-bf37-a01716591d49" (UID: "3d30dd02-24bf-444b-bf37-a01716591d49"). InnerVolumeSpecName "kube-api-access-jnvjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.902777 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903139 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903165 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903222 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dch9g\" (UniqueName: \"kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903280 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903292 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903302 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnvjw\" (UniqueName: \"kubernetes.io/projected/3d30dd02-24bf-444b-bf37-a01716591d49-kube-api-access-jnvjw\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903314 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3d30dd02-24bf-444b-bf37-a01716591d49-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903325 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-serving-cert\") on 
node \"crc\" DevicePath \"\"" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903336 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903348 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903360 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tckq9\" (UniqueName: \"kubernetes.io/projected/2bd0778a-d1f2-417f-acc4-e6cb92c96f45-kube-api-access-tckq9\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.903368 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3d30dd02-24bf-444b-bf37-a01716591d49-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.904121 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.904332 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.908722 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.931640 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dch9g\" (UniqueName: \"kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g\") pod \"route-controller-manager-76946b564d-nl7wq\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:47 crc kubenswrapper[4775]: I0123 14:09:47.941339 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.160942 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"] Jan 23 14:09:48 crc kubenswrapper[4775]: W0123 14:09:48.165660 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7ab4aa6_c476_4952_a259_e1e63a42bb69.slice/crio-0d59494029faa0dc8c83935b2a8d96eb1666ed423d428c52740a79423310818f WatchSource:0}: Error finding container 0d59494029faa0dc8c83935b2a8d96eb1666ed423d428c52740a79423310818f: Status 404 returned error can't find the container with id 0d59494029faa0dc8c83935b2a8d96eb1666ed423d428c52740a79423310818f Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.405896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" event={"ID":"d7ab4aa6-c476-4952-a259-e1e63a42bb69","Type":"ContainerStarted","Data":"781a04fc229c3442a54b74394d8d8073527ad1460a3c3be51f6f7244137482ea"} Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.406242 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.406254 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" event={"ID":"d7ab4aa6-c476-4952-a259-e1e63a42bb69","Type":"ContainerStarted","Data":"0d59494029faa0dc8c83935b2a8d96eb1666ed423d428c52740a79423310818f"} Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.407483 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.407499 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-5xxrk" event={"ID":"2bd0778a-d1f2-417f-acc4-e6cb92c96f45","Type":"ContainerDied","Data":"9e9466cc4afe62937f0a277fd984621693d8d498e195874a416bc6b22d04e74c"} Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.407582 4775 scope.go:117] "RemoveContainer" containerID="17b6896c4af87db6f9c0269c156ed1d9d3db521d11c0cb4f7f748ba38f2f8bb6" Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.409256 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" event={"ID":"3d30dd02-24bf-444b-bf37-a01716591d49","Type":"ContainerDied","Data":"b2254ed186345bce63999097f48f317e358a300d29e3faa324494ca3b29d1a75"} Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.409310 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd" Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.424928 4775 scope.go:117] "RemoveContainer" containerID="07b5183f82d839051054341b96c5c2531a81bc3b09e02b8f914291fa8161b35b" Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.431090 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" podStartSLOduration=2.43107599 podStartE2EDuration="2.43107599s" podCreationTimestamp="2026-01-23 14:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:48.43043315 +0000 UTC m=+335.425261910" watchObservedRunningTime="2026-01-23 14:09:48.43107599 +0000 UTC m=+335.425904730" Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.440895 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"] Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.445617 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-69gnd"] Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.455364 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"] Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.458540 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-5xxrk"] Jan 23 14:09:48 crc kubenswrapper[4775]: I0123 14:09:48.544916 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:09:49 crc kubenswrapper[4775]: I0123 14:09:49.724163 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd0778a-d1f2-417f-acc4-e6cb92c96f45" path="/var/lib/kubelet/pods/2bd0778a-d1f2-417f-acc4-e6cb92c96f45/volumes" Jan 23 14:09:49 crc kubenswrapper[4775]: I0123 14:09:49.729329 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d30dd02-24bf-444b-bf37-a01716591d49" path="/var/lib/kubelet/pods/3d30dd02-24bf-444b-bf37-a01716591d49/volumes" Jan 23 14:09:49 crc kubenswrapper[4775]: I0123 14:09:49.914550 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.079497 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"] Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.080868 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.084641 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.085088 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.085924 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.086394 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.086664 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.089348 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.099038 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.101372 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"] Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.231967 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.232044 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.232088 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.232136 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.232173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2gwg\" (UniqueName: 
\"kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.333933 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.334036 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.334072 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.334124 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.334161 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2gwg\" (UniqueName: \"kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.336536 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.336686 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.338330 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 
23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.350768 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.368150 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2gwg\" (UniqueName: \"kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg\") pod \"controller-manager-d6f97d578-2hjdt\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.409325 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:50 crc kubenswrapper[4775]: I0123 14:09:50.694161 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"] Jan 23 14:09:51 crc kubenswrapper[4775]: I0123 14:09:51.427882 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" event={"ID":"f2b9e347-4937-4835-b496-178073507714","Type":"ContainerStarted","Data":"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0"} Jan 23 14:09:51 crc kubenswrapper[4775]: I0123 14:09:51.428194 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" event={"ID":"f2b9e347-4937-4835-b496-178073507714","Type":"ContainerStarted","Data":"7d9751b50e071ccb4609de4a7a32972dacdb52e1f06dc5123bad488447e2ce18"} Jan 23 14:09:51 crc kubenswrapper[4775]: I0123 14:09:51.429456 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:51 crc kubenswrapper[4775]: I0123 14:09:51.435143 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:09:51 crc kubenswrapper[4775]: I0123 14:09:51.456123 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" podStartSLOduration=5.456096826 podStartE2EDuration="5.456096826s" podCreationTimestamp="2026-01-23 14:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:09:51.451986815 +0000 UTC m=+338.446815585" watchObservedRunningTime="2026-01-23 14:09:51.456096826 +0000 UTC m=+338.450925576" Jan 23 14:09:53 crc kubenswrapper[4775]: I0123 14:09:53.219730 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:09:53 crc kubenswrapper[4775]: I0123 14:09:53.219881 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.410624 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7ld89"] Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.412156 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.432733 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7ld89"] Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.567832 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.568260 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9856\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-kube-api-access-c9856\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.568533 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-trusted-ca\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.568780 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7f5d763e-6546-4013-9a83-b3c24c48d8bb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.569039 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-certificates\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.569377 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-tls\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.569861 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/7f5d763e-6546-4013-9a83-b3c24c48d8bb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.570230 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-bound-sa-token\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.602226 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677162 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-bound-sa-token\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677440 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9856\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-kube-api-access-c9856\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677602 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-trusted-ca\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7f5d763e-6546-4013-9a83-b3c24c48d8bb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677772 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-certificates\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677895 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-tls\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.677991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7f5d763e-6546-4013-9a83-b3c24c48d8bb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.680280 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-trusted-ca\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.680874 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7f5d763e-6546-4013-9a83-b3c24c48d8bb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.682470 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-certificates\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.688778 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-registry-tls\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.694754 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7f5d763e-6546-4013-9a83-b3c24c48d8bb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.709967 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-bound-sa-token\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.713085 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9856\" (UniqueName: \"kubernetes.io/projected/7f5d763e-6546-4013-9a83-b3c24c48d8bb-kube-api-access-c9856\") pod \"image-registry-66df7c8f76-7ld89\" (UID: \"7f5d763e-6546-4013-9a83-b3c24c48d8bb\") " pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:18 crc kubenswrapper[4775]: I0123 14:10:18.735851 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:19 crc kubenswrapper[4775]: I0123 14:10:19.196625 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-7ld89"] Jan 23 14:10:19 crc kubenswrapper[4775]: W0123 14:10:19.203702 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f5d763e_6546_4013_9a83_b3c24c48d8bb.slice/crio-493cf16521d0c8904fd5e41b29dee53ccd1c2e8093db7361429d9e6c6c0b3a1d WatchSource:0}: Error finding container 493cf16521d0c8904fd5e41b29dee53ccd1c2e8093db7361429d9e6c6c0b3a1d: Status 404 returned error can't find the container with id 493cf16521d0c8904fd5e41b29dee53ccd1c2e8093db7361429d9e6c6c0b3a1d Jan 23 14:10:19 crc kubenswrapper[4775]: I0123 14:10:19.602834 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" event={"ID":"7f5d763e-6546-4013-9a83-b3c24c48d8bb","Type":"ContainerStarted","Data":"adbaf0b34ee6fc3e6a5ca459e4fefdddc60f614f4c1583b24f4330225ec9c59d"} Jan 23 14:10:19 crc kubenswrapper[4775]: I0123 14:10:19.602877 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" event={"ID":"7f5d763e-6546-4013-9a83-b3c24c48d8bb","Type":"ContainerStarted","Data":"493cf16521d0c8904fd5e41b29dee53ccd1c2e8093db7361429d9e6c6c0b3a1d"} Jan 23 14:10:19 crc kubenswrapper[4775]: I0123 14:10:19.602998 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:23 crc kubenswrapper[4775]: I0123 14:10:23.219307 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:10:23 crc kubenswrapper[4775]: I0123 14:10:23.219915 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:10:26 crc kubenswrapper[4775]: I0123 14:10:26.564837 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" podStartSLOduration=8.56479648 podStartE2EDuration="8.56479648s" podCreationTimestamp="2026-01-23 14:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:10:19.622948123 +0000 UTC m=+366.617776873" watchObservedRunningTime="2026-01-23 14:10:26.56479648 +0000 UTC m=+373.559625230" Jan 23 14:10:26 crc kubenswrapper[4775]: I0123 14:10:26.565586 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"] Jan 23 14:10:26 crc kubenswrapper[4775]: I0123 14:10:26.565781 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" podUID="f2b9e347-4937-4835-b496-178073507714" containerName="controller-manager" 
containerID="cri-o://0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" gracePeriod=30 Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.031205 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.174094 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles\") pod \"f2b9e347-4937-4835-b496-178073507714\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.174175 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2gwg\" (UniqueName: \"kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg\") pod \"f2b9e347-4937-4835-b496-178073507714\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.174987 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config\") pod \"f2b9e347-4937-4835-b496-178073507714\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.175027 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca\") pod \"f2b9e347-4937-4835-b496-178073507714\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.175055 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert\") pod \"f2b9e347-4937-4835-b496-178073507714\" (UID: \"f2b9e347-4937-4835-b496-178073507714\") " Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.175613 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca" (OuterVolumeSpecName: "client-ca") pod "f2b9e347-4937-4835-b496-178073507714" (UID: "f2b9e347-4937-4835-b496-178073507714"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.175549 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f2b9e347-4937-4835-b496-178073507714" (UID: "f2b9e347-4937-4835-b496-178073507714"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.176026 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config" (OuterVolumeSpecName: "config") pod "f2b9e347-4937-4835-b496-178073507714" (UID: "f2b9e347-4937-4835-b496-178073507714"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.181039 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg" (OuterVolumeSpecName: "kube-api-access-b2gwg") pod "f2b9e347-4937-4835-b496-178073507714" (UID: "f2b9e347-4937-4835-b496-178073507714"). InnerVolumeSpecName "kube-api-access-b2gwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.185481 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f2b9e347-4937-4835-b496-178073507714" (UID: "f2b9e347-4937-4835-b496-178073507714"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.275820 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.275866 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.275878 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2b9e347-4937-4835-b496-178073507714-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.275890 4775 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f2b9e347-4937-4835-b496-178073507714-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.275904 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2gwg\" (UniqueName: \"kubernetes.io/projected/f2b9e347-4937-4835-b496-178073507714-kube-api-access-b2gwg\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.659436 4775 generic.go:334] "Generic (PLEG): container finished" podID="f2b9e347-4937-4835-b496-178073507714" containerID="0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" exitCode=0 Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.659473 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" event={"ID":"f2b9e347-4937-4835-b496-178073507714","Type":"ContainerDied","Data":"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0"} Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.659503 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" event={"ID":"f2b9e347-4937-4835-b496-178073507714","Type":"ContainerDied","Data":"7d9751b50e071ccb4609de4a7a32972dacdb52e1f06dc5123bad488447e2ce18"} Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.659522 4775 scope.go:117] "RemoveContainer" containerID="0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.659526 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d6f97d578-2hjdt" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.687883 4775 scope.go:117] "RemoveContainer" containerID="0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" Jan 23 14:10:27 crc kubenswrapper[4775]: E0123 14:10:27.688597 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0\": container with ID starting with 0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0 not found: ID does not exist" containerID="0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.688634 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0"} err="failed to get container status \"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0\": rpc error: code = NotFound desc = could not find container \"0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0\": container with ID starting with 0acc0ad8ec8e8be9769a28309cee2b6e18eb66e5c98ef58afd161b49ec1c7bb0 not found: ID does not exist" Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.707315 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"] Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.712911 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d6f97d578-2hjdt"] Jan 23 14:10:27 crc kubenswrapper[4775]: I0123 14:10:27.726546 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b9e347-4937-4835-b496-178073507714" path="/var/lib/kubelet/pods/f2b9e347-4937-4835-b496-178073507714/volumes" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.111630 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-6kdh2"] Jan 23 14:10:28 crc kubenswrapper[4775]: E0123 14:10:28.112132 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b9e347-4937-4835-b496-178073507714" containerName="controller-manager" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.112160 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b9e347-4937-4835-b496-178073507714" containerName="controller-manager" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.112314 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b9e347-4937-4835-b496-178073507714" containerName="controller-manager" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.113005 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.115433 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.117790 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.118333 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.118587 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.120907 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.121065 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.127098 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-6kdh2"] Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.131169 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.290673 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-config\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.290753 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0875fb84-cf98-476b-9330-e28814be3bfe-serving-cert\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.290859 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqrc\" (UniqueName: \"kubernetes.io/projected/0875fb84-cf98-476b-9330-e28814be3bfe-kube-api-access-2xqrc\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.290890 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-client-ca\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.291006 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.392320 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.392382 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-config\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.392424 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0875fb84-cf98-476b-9330-e28814be3bfe-serving-cert\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.392470 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xqrc\" (UniqueName: \"kubernetes.io/projected/0875fb84-cf98-476b-9330-e28814be3bfe-kube-api-access-2xqrc\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.392498 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-client-ca\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.393617 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-client-ca\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.393866 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-config\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.393968 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0875fb84-cf98-476b-9330-e28814be3bfe-proxy-ca-bundles\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " 
pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.403916 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0875fb84-cf98-476b-9330-e28814be3bfe-serving-cert\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.419083 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xqrc\" (UniqueName: \"kubernetes.io/projected/0875fb84-cf98-476b-9330-e28814be3bfe-kube-api-access-2xqrc\") pod \"controller-manager-785c4bb865-6kdh2\" (UID: \"0875fb84-cf98-476b-9330-e28814be3bfe\") " pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.441151 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:28 crc kubenswrapper[4775]: I0123 14:10:28.844049 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-785c4bb865-6kdh2"] Jan 23 14:10:28 crc kubenswrapper[4775]: W0123 14:10:28.852620 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0875fb84_cf98_476b_9330_e28814be3bfe.slice/crio-776ec008f6473445186534a00afdb22793be50e8f1c3b81db5ce335c10a3619a WatchSource:0}: Error finding container 776ec008f6473445186534a00afdb22793be50e8f1c3b81db5ce335c10a3619a: Status 404 returned error can't find the container with id 776ec008f6473445186534a00afdb22793be50e8f1c3b81db5ce335c10a3619a Jan 23 14:10:29 crc kubenswrapper[4775]: I0123 14:10:29.673693 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" event={"ID":"0875fb84-cf98-476b-9330-e28814be3bfe","Type":"ContainerStarted","Data":"6e88197c7c21036b5ee13c2b295b846ddf6410d5ce528b60103ac45da754af27"} Jan 23 14:10:29 crc kubenswrapper[4775]: I0123 14:10:29.673774 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" event={"ID":"0875fb84-cf98-476b-9330-e28814be3bfe","Type":"ContainerStarted","Data":"776ec008f6473445186534a00afdb22793be50e8f1c3b81db5ce335c10a3619a"} Jan 23 14:10:29 crc kubenswrapper[4775]: I0123 14:10:29.674119 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:29 crc kubenswrapper[4775]: I0123 14:10:29.681572 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" Jan 23 14:10:29 crc kubenswrapper[4775]: I0123 14:10:29.696324 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-785c4bb865-6kdh2" podStartSLOduration=3.6962912279999998 podStartE2EDuration="3.696291228s" podCreationTimestamp="2026-01-23 14:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:10:29.693507719 +0000 UTC m=+376.688336449" watchObservedRunningTime="2026-01-23 14:10:29.696291228 +0000 UTC m=+376.691120008" Jan 23 14:10:38 
crc kubenswrapper[4775]: I0123 14:10:38.741991 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-7ld89" Jan 23 14:10:38 crc kubenswrapper[4775]: I0123 14:10:38.801124 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.345419 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-285dn"] Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.347457 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-285dn" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="registry-server" containerID="cri-o://5d5b3239c4354bbf8668793adb57fca35d10a6d969fbc9bd29c2463925617ab2" gracePeriod=30 Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.356824 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q2jj"] Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.357688 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2q2jj" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="registry-server" containerID="cri-o://c7260cd3d625fa792d5d94bcaae087826a69b9166dd1b6258fd35d2e1bd77b66" gracePeriod=30 Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.370565 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"] Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.371017 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator" containerID="cri-o://f51d1a8b2d530002962d11af10b4a9dc9403d48b6849c26ac64175b119f21f51" gracePeriod=30 Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.384024 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"] Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.384412 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q6l68" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="registry-server" containerID="cri-o://706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" gracePeriod=30 Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.396067 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"] Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.396327 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24s7d"] Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.397076 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.402758 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24s7d"] Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.440332 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-84gx7" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="registry-server" containerID="cri-o://d42ef899e57f6183a5f1a3a8ba0663646429d61c6d74c35df738852826152a1c" gracePeriod=30 Jan 23 14:10:50 crc kubenswrapper[4775]: E0123 14:10:50.470268 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f is running failed: container process not found" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 14:10:50 crc kubenswrapper[4775]: E0123 14:10:50.471098 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f is running failed: container process not found" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 14:10:50 crc kubenswrapper[4775]: E0123 14:10:50.471523 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f is running failed: container process not found" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 14:10:50 crc kubenswrapper[4775]: E0123 14:10:50.471554 4775 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-q6l68" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="registry-server" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.499781 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.499862 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.499896 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h92f\" 
(UniqueName: \"kubernetes.io/projected/ffa6638c-aaa0-418b-ad22-e5532ae16f68-kube-api-access-4h92f\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.601482 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.601526 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.601554 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h92f\" (UniqueName: \"kubernetes.io/projected/ffa6638c-aaa0-418b-ad22-e5532ae16f68-kube-api-access-4h92f\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.602999 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.609635 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/ffa6638c-aaa0-418b-ad22-e5532ae16f68-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.623790 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4h92f\" (UniqueName: \"kubernetes.io/projected/ffa6638c-aaa0-418b-ad22-e5532ae16f68-kube-api-access-4h92f\") pod \"marketplace-operator-79b997595-24s7d\" (UID: \"ffa6638c-aaa0-418b-ad22-e5532ae16f68\") " pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.798313 4775 generic.go:334] "Generic (PLEG): container finished" podID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" exitCode=0 Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.798390 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerDied","Data":"706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f"} Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.800336 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="1b219edd-2ebd-4968-b427-ec555eade68c" containerID="5d5b3239c4354bbf8668793adb57fca35d10a6d969fbc9bd29c2463925617ab2" exitCode=0 Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.800395 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerDied","Data":"5d5b3239c4354bbf8668793adb57fca35d10a6d969fbc9bd29c2463925617ab2"} Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.802241 4775 generic.go:334] "Generic (PLEG): container finished" podID="8bb5169a-229e-4d38-beea-4783c11d0098" containerID="c7260cd3d625fa792d5d94bcaae087826a69b9166dd1b6258fd35d2e1bd77b66" exitCode=0 Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.802289 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerDied","Data":"c7260cd3d625fa792d5d94bcaae087826a69b9166dd1b6258fd35d2e1bd77b66"} Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.803607 4775 generic.go:334] "Generic (PLEG): container finished" podID="8ac48e42-bde7-4701-b994-825906603b06" containerID="f51d1a8b2d530002962d11af10b4a9dc9403d48b6849c26ac64175b119f21f51" exitCode=0 Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.803670 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" event={"ID":"8ac48e42-bde7-4701-b994-825906603b06","Type":"ContainerDied","Data":"f51d1a8b2d530002962d11af10b4a9dc9403d48b6849c26ac64175b119f21f51"} Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.805716 4775 generic.go:334] "Generic (PLEG): container finished" podID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerID="d42ef899e57f6183a5f1a3a8ba0663646429d61c6d74c35df738852826152a1c" exitCode=0 Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.805756 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerDied","Data":"d42ef899e57f6183a5f1a3a8ba0663646429d61c6d74c35df738852826152a1c"} Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.871726 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:50 crc kubenswrapper[4775]: I0123 14:10:50.874326 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.014309 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.016652 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnxtf\" (UniqueName: \"kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf\") pod \"1b219edd-2ebd-4968-b427-ec555eade68c\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.016692 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities\") pod \"1b219edd-2ebd-4968-b427-ec555eade68c\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.016730 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content\") pod \"1b219edd-2ebd-4968-b427-ec555eade68c\" (UID: \"1b219edd-2ebd-4968-b427-ec555eade68c\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.017637 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities" (OuterVolumeSpecName: "utilities") pod "1b219edd-2ebd-4968-b427-ec555eade68c" (UID: "1b219edd-2ebd-4968-b427-ec555eade68c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.024710 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.027243 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf" (OuterVolumeSpecName: "kube-api-access-vnxtf") pod "1b219edd-2ebd-4968-b427-ec555eade68c" (UID: "1b219edd-2ebd-4968-b427-ec555eade68c"). InnerVolumeSpecName "kube-api-access-vnxtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.069476 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.077474 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.083747 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b219edd-2ebd-4968-b427-ec555eade68c" (UID: "1b219edd-2ebd-4968-b427-ec555eade68c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.118376 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics\") pod \"8ac48e42-bde7-4701-b994-825906603b06\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.118451 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities\") pod \"8bb5169a-229e-4d38-beea-4783c11d0098\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.118510 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2lfm\" (UniqueName: \"kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm\") pod \"8bb5169a-229e-4d38-beea-4783c11d0098\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.118623 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwv8t\" (UniqueName: \"kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t\") pod \"8ac48e42-bde7-4701-b994-825906603b06\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.118667 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content\") pod \"8bb5169a-229e-4d38-beea-4783c11d0098\" (UID: \"8bb5169a-229e-4d38-beea-4783c11d0098\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.119601 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca\") pod \"8ac48e42-bde7-4701-b994-825906603b06\" (UID: \"8ac48e42-bde7-4701-b994-825906603b06\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.119677 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities" (OuterVolumeSpecName: "utilities") pod "8bb5169a-229e-4d38-beea-4783c11d0098" (UID: "8bb5169a-229e-4d38-beea-4783c11d0098"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.120121 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.120166 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.120179 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b219edd-2ebd-4968-b427-ec555eade68c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.120186 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8ac48e42-bde7-4701-b994-825906603b06" (UID: "8ac48e42-bde7-4701-b994-825906603b06"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.120196 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnxtf\" (UniqueName: \"kubernetes.io/projected/1b219edd-2ebd-4968-b427-ec555eade68c-kube-api-access-vnxtf\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.121018 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8ac48e42-bde7-4701-b994-825906603b06" (UID: "8ac48e42-bde7-4701-b994-825906603b06"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.122197 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm" (OuterVolumeSpecName: "kube-api-access-f2lfm") pod "8bb5169a-229e-4d38-beea-4783c11d0098" (UID: "8bb5169a-229e-4d38-beea-4783c11d0098"). InnerVolumeSpecName "kube-api-access-f2lfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.124099 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t" (OuterVolumeSpecName: "kube-api-access-bwv8t") pod "8ac48e42-bde7-4701-b994-825906603b06" (UID: "8ac48e42-bde7-4701-b994-825906603b06"). InnerVolumeSpecName "kube-api-access-bwv8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.202173 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bb5169a-229e-4d38-beea-4783c11d0098" (UID: "8bb5169a-229e-4d38-beea-4783c11d0098"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221380 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities\") pod \"0e3253a9-fac0-401c-8e02-52758dbc40f3\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221468 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phm66\" (UniqueName: \"kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66\") pod \"e59d5724-424f-4151-98a4-c2cfa3918ac0\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221542 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2k5h\" (UniqueName: \"kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h\") pod \"0e3253a9-fac0-401c-8e02-52758dbc40f3\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221559 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content\") pod \"e59d5724-424f-4151-98a4-c2cfa3918ac0\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221575 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content\") pod \"0e3253a9-fac0-401c-8e02-52758dbc40f3\" (UID: \"0e3253a9-fac0-401c-8e02-52758dbc40f3\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221665 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities\") pod \"e59d5724-424f-4151-98a4-c2cfa3918ac0\" (UID: \"e59d5724-424f-4151-98a4-c2cfa3918ac0\") " Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221894 4775 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ac48e42-bde7-4701-b994-825906603b06-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221912 4775 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ac48e42-bde7-4701-b994-825906603b06-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221922 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2lfm\" (UniqueName: \"kubernetes.io/projected/8bb5169a-229e-4d38-beea-4783c11d0098-kube-api-access-f2lfm\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221932 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwv8t\" (UniqueName: \"kubernetes.io/projected/8ac48e42-bde7-4701-b994-825906603b06-kube-api-access-bwv8t\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.221941 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8bb5169a-229e-4d38-beea-4783c11d0098-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.222307 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities" (OuterVolumeSpecName: "utilities") pod "0e3253a9-fac0-401c-8e02-52758dbc40f3" (UID: "0e3253a9-fac0-401c-8e02-52758dbc40f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.222853 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities" (OuterVolumeSpecName: "utilities") pod "e59d5724-424f-4151-98a4-c2cfa3918ac0" (UID: "e59d5724-424f-4151-98a4-c2cfa3918ac0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.225163 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h" (OuterVolumeSpecName: "kube-api-access-h2k5h") pod "0e3253a9-fac0-401c-8e02-52758dbc40f3" (UID: "0e3253a9-fac0-401c-8e02-52758dbc40f3"). InnerVolumeSpecName "kube-api-access-h2k5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.225644 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66" (OuterVolumeSpecName: "kube-api-access-phm66") pod "e59d5724-424f-4151-98a4-c2cfa3918ac0" (UID: "e59d5724-424f-4151-98a4-c2cfa3918ac0"). InnerVolumeSpecName "kube-api-access-phm66". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.251461 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e59d5724-424f-4151-98a4-c2cfa3918ac0" (UID: "e59d5724-424f-4151-98a4-c2cfa3918ac0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.323557 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.323609 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.323620 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phm66\" (UniqueName: \"kubernetes.io/projected/e59d5724-424f-4151-98a4-c2cfa3918ac0-kube-api-access-phm66\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.323631 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2k5h\" (UniqueName: \"kubernetes.io/projected/0e3253a9-fac0-401c-8e02-52758dbc40f3-kube-api-access-h2k5h\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.323641 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e59d5724-424f-4151-98a4-c2cfa3918ac0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.358579 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e3253a9-fac0-401c-8e02-52758dbc40f3" (UID: "0e3253a9-fac0-401c-8e02-52758dbc40f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.407337 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-24s7d"] Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.424644 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e3253a9-fac0-401c-8e02-52758dbc40f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.734916 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-pmcq8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.734980 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.18:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.813091 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q6l68" event={"ID":"e59d5724-424f-4151-98a4-c2cfa3918ac0","Type":"ContainerDied","Data":"26c35738c37491d0603ee348b5fe634ea59da9d48f5e4b15355f05e6dc983614"} Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.813166 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q6l68" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.813874 4775 scope.go:117] "RemoveContainer" containerID="706b207c906b11477ffafcc96a740d5e3fd0c32011317bda62a73b4005aa1b8f" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.815571 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-285dn" event={"ID":"1b219edd-2ebd-4968-b427-ec555eade68c","Type":"ContainerDied","Data":"9a6cbd2e89e6d00653f0a6c222530e1e89b3f96e06271f5d87d7fff651ac3937"} Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.815655 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-285dn" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.819571 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2q2jj" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.819573 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2q2jj" event={"ID":"8bb5169a-229e-4d38-beea-4783c11d0098","Type":"ContainerDied","Data":"3666244710ce45438b030ced5df57918d02f4be6ca49d93c06949ae50a2a548e"} Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.821121 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" event={"ID":"8ac48e42-bde7-4701-b994-825906603b06","Type":"ContainerDied","Data":"14f4d6283aff6de605f724a865763d27a0a448211bbacd5d102fb5562e6f44ef"} Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.821215 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-pmcq8" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.823397 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-84gx7" event={"ID":"0e3253a9-fac0-401c-8e02-52758dbc40f3","Type":"ContainerDied","Data":"15af52003ac596b61d4d000ce7f453341ef0c574add7e4ae39f4de44a23d82f4"} Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.823451 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-84gx7" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.825747 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" event={"ID":"ffa6638c-aaa0-418b-ad22-e5532ae16f68","Type":"ContainerStarted","Data":"f94ffef5cffeb674ca96c38d4958c9570180370c8682636ebe6553c7d4d8066d"} Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.825772 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" event={"ID":"ffa6638c-aaa0-418b-ad22-e5532ae16f68","Type":"ContainerStarted","Data":"308617b5db3c6fab4969661f3b3eff4fa11db923b836f0b38c2f7187515fce6f"} Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.825988 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.827882 4775 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-24s7d container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.69:8080/healthz\": dial tcp 10.217.0.69:8080: connect: connection refused" start-of-body= Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.827949 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" podUID="ffa6638c-aaa0-418b-ad22-e5532ae16f68" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.69:8080/healthz\": dial tcp 10.217.0.69:8080: connect: connection refused" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.833195 4775 scope.go:117] "RemoveContainer" containerID="cfd053c22baaf71bc6e6f5aaf2077bc268a3849c132a7cf71ad6b25d80b48bc6" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.844681 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"] Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.856265 4775 scope.go:117] "RemoveContainer" containerID="b99c9f768aa87908f3ac8df6adf51f693264f7a4696b77a222908931aa45eca9" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.856441 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q6l68"] Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.877704 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d" podStartSLOduration=1.877679533 podStartE2EDuration="1.877679533s" podCreationTimestamp="2026-01-23 14:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:10:51.865586125 +0000 UTC m=+398.860414865" watchObservedRunningTime="2026-01-23 14:10:51.877679533 +0000 UTC m=+398.872508273" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.881561 4775 scope.go:117] "RemoveContainer" containerID="5d5b3239c4354bbf8668793adb57fca35d10a6d969fbc9bd29c2463925617ab2" Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.886044 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"] Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.890094 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-pmcq8"] Jan 23 14:10:51 crc 
kubenswrapper[4775]: I0123 14:10:51.896657 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.901849 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-84gx7"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.903781 4775 scope.go:117] "RemoveContainer" containerID="b1229993babbc54c28d7f94650301e60c409ed8c65f3e43af5dfec3a30554ce5"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.910933 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-285dn"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.914014 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-285dn"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.929764 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2q2jj"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.933748 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2q2jj"]
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.941762 4775 scope.go:117] "RemoveContainer" containerID="1dfa5709162617f477770a0c1b0ee689961a84471dd689b9f7007baa498421fb"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.958029 4775 scope.go:117] "RemoveContainer" containerID="c7260cd3d625fa792d5d94bcaae087826a69b9166dd1b6258fd35d2e1bd77b66"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.979308 4775 scope.go:117] "RemoveContainer" containerID="e563f1706af6b75f9ac6731329cafb2b41d302473241046df0512766a2019809"
Jan 23 14:10:51 crc kubenswrapper[4775]: I0123 14:10:51.994536 4775 scope.go:117] "RemoveContainer" containerID="c0baa5a93e54c6225c779b90a89902f01c5bdd44c7fddb995bab3ef18e6ecb5f"
Jan 23 14:10:52 crc kubenswrapper[4775]: I0123 14:10:52.009011 4775 scope.go:117] "RemoveContainer" containerID="f51d1a8b2d530002962d11af10b4a9dc9403d48b6849c26ac64175b119f21f51"
Jan 23 14:10:52 crc kubenswrapper[4775]: I0123 14:10:52.021921 4775 scope.go:117] "RemoveContainer" containerID="d42ef899e57f6183a5f1a3a8ba0663646429d61c6d74c35df738852826152a1c"
Jan 23 14:10:52 crc kubenswrapper[4775]: I0123 14:10:52.040060 4775 scope.go:117] "RemoveContainer" containerID="5650f2902470285f87f0519671b820000e9540073b92320e14586d65634addb8"
Jan 23 14:10:52 crc kubenswrapper[4775]: I0123 14:10:52.056132 4775 scope.go:117] "RemoveContainer" containerID="33e54abbac164ceea7f804e54924e8f9324295ef8959032204bb2d352664a565"
Jan 23 14:10:52 crc kubenswrapper[4775]: I0123 14:10:52.843343 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-24s7d"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.169860 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bb2pb"]
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.170614 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.170754 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.170908 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.171069 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.171198 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.171311 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.171426 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.171536 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.171652 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.171776 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.171946 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.172061 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.172174 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.172295 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.172412 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.172566 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="extract-utilities"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.172898 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.174191 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.174334 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.174469 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.174599 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.174992 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.175260 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.175454 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="extract-content"
Jan 23 14:10:53 crc kubenswrapper[4775]: E0123 14:10:53.175622 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.175742 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.176072 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.176231 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.176353 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.176651 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ac48e42-bde7-4701-b994-825906603b06" containerName="marketplace-operator"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.176790 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" containerName="registry-server"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.178048 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bb2pb"]
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.178247 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bb2pb"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.182681 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.218536 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.218589 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.218626 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.219307 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.219373 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b" gracePeriod=600
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.254366 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-utilities\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.254416 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-catalog-content\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.254461 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkqcd\" (UniqueName: \"kubernetes.io/projected/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-kube-api-access-jkqcd\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.355227 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-utilities\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.355296 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-catalog-content\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.355352 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkqcd\" (UniqueName: \"kubernetes.io/projected/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-kube-api-access-jkqcd\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.356145 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-utilities\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.356393 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-catalog-content\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb"
Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.380229 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkqcd\" (UniqueName: \"kubernetes.io/projected/d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5-kube-api-access-jkqcd\") pod \"certified-operators-bb2pb\" (UID: \"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5\") " pod="openshift-marketplace/certified-operators-bb2pb"
Need to start a new one" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.727896 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e3253a9-fac0-401c-8e02-52758dbc40f3" path="/var/lib/kubelet/pods/0e3253a9-fac0-401c-8e02-52758dbc40f3/volumes" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.729179 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b219edd-2ebd-4968-b427-ec555eade68c" path="/var/lib/kubelet/pods/1b219edd-2ebd-4968-b427-ec555eade68c/volumes" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.735611 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ac48e42-bde7-4701-b994-825906603b06" path="/var/lib/kubelet/pods/8ac48e42-bde7-4701-b994-825906603b06/volumes" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.736783 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bb5169a-229e-4d38-beea-4783c11d0098" path="/var/lib/kubelet/pods/8bb5169a-229e-4d38-beea-4783c11d0098/volumes" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.739073 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e59d5724-424f-4151-98a4-c2cfa3918ac0" path="/var/lib/kubelet/pods/e59d5724-424f-4151-98a4-c2cfa3918ac0/volumes" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.846083 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b" exitCode=0 Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.846143 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b"} Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.846280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb"} Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.846306 4775 scope.go:117] "RemoveContainer" containerID="69c7397026314cee652c2eda6c2c79bc111cd330ec7e40845f857e3ac91c3f8d" Jan 23 14:10:53 crc kubenswrapper[4775]: I0123 14:10:53.929968 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bb2pb"] Jan 23 14:10:53 crc kubenswrapper[4775]: W0123 14:10:53.937526 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9f7bf95_e60c_4dbb_bb9b_0a7c038871f5.slice/crio-7022b882659a424c53c12dd0ac5418bbf9f51d0c15e9e27534d9a4ffff36d4ed WatchSource:0}: Error finding container 7022b882659a424c53c12dd0ac5418bbf9f51d0c15e9e27534d9a4ffff36d4ed: Status 404 returned error can't find the container with id 7022b882659a424c53c12dd0ac5418bbf9f51d0c15e9e27534d9a4ffff36d4ed Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.164101 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sx4qm"] Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.165150 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.170632 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.178532 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sx4qm"] Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.265400 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-utilities\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.265457 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-catalog-content\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.265508 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsfvm\" (UniqueName: \"kubernetes.io/projected/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-kube-api-access-xsfvm\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.367136 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-catalog-content\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.367476 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsfvm\" (UniqueName: \"kubernetes.io/projected/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-kube-api-access-xsfvm\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.367657 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-utilities\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.368028 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-catalog-content\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.368590 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-utilities\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " 
pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.395000 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsfvm\" (UniqueName: \"kubernetes.io/projected/0c94dee4-8e79-4f60-a8b9-2c1f33490ba7-kube-api-access-xsfvm\") pod \"redhat-operators-sx4qm\" (UID: \"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7\") " pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.497254 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.855978 4775 generic.go:334] "Generic (PLEG): container finished" podID="d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5" containerID="9989f4fe62a0e1a80697783a84696d42ebb144b7ea9072d980c54c388c525362" exitCode=0 Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.856072 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb2pb" event={"ID":"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5","Type":"ContainerDied","Data":"9989f4fe62a0e1a80697783a84696d42ebb144b7ea9072d980c54c388c525362"} Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.856139 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb2pb" event={"ID":"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5","Type":"ContainerStarted","Data":"7022b882659a424c53c12dd0ac5418bbf9f51d0c15e9e27534d9a4ffff36d4ed"} Jan 23 14:10:54 crc kubenswrapper[4775]: I0123 14:10:54.967417 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sx4qm"] Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.569404 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8jjcj"] Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.570686 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.576329 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.579435 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jjcj"] Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.685546 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bggxz\" (UniqueName: \"kubernetes.io/projected/ed5c162e-62a9-4760-b5e0-a249a70225a0-kube-api-access-bggxz\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.685638 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-utilities\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.685900 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-catalog-content\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.787035 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-catalog-content\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.787433 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bggxz\" (UniqueName: \"kubernetes.io/projected/ed5c162e-62a9-4760-b5e0-a249a70225a0-kube-api-access-bggxz\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.787462 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-utilities\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.787663 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-catalog-content\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.787852 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed5c162e-62a9-4760-b5e0-a249a70225a0-utilities\") pod \"community-operators-8jjcj\" (UID: 
\"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.809148 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bggxz\" (UniqueName: \"kubernetes.io/projected/ed5c162e-62a9-4760-b5e0-a249a70225a0-kube-api-access-bggxz\") pod \"community-operators-8jjcj\" (UID: \"ed5c162e-62a9-4760-b5e0-a249a70225a0\") " pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.867448 4775 generic.go:334] "Generic (PLEG): container finished" podID="0c94dee4-8e79-4f60-a8b9-2c1f33490ba7" containerID="49eae8b296f7a930d3ad9eb1d232b6b88d388c8ed6fa7354489e7e0745b32b91" exitCode=0 Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.867559 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4qm" event={"ID":"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7","Type":"ContainerDied","Data":"49eae8b296f7a930d3ad9eb1d232b6b88d388c8ed6fa7354489e7e0745b32b91"} Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.868494 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4qm" event={"ID":"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7","Type":"ContainerStarted","Data":"193db82826c761a02002446dfeffdbc415e5b21166d432e69177b9b669bcaa15"} Jan 23 14:10:55 crc kubenswrapper[4775]: I0123 14:10:55.901891 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.333539 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8jjcj"] Jan 23 14:10:56 crc kubenswrapper[4775]: W0123 14:10:56.339255 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded5c162e_62a9_4760_b5e0_a249a70225a0.slice/crio-7ff35cfb0cf5ad9ea510a287522f3050d687da21d1562f1aa925203d3b208c3b WatchSource:0}: Error finding container 7ff35cfb0cf5ad9ea510a287522f3050d687da21d1562f1aa925203d3b208c3b: Status 404 returned error can't find the container with id 7ff35cfb0cf5ad9ea510a287522f3050d687da21d1562f1aa925203d3b208c3b Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.567673 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fxcrw"] Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.571008 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.573545 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.574262 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxcrw"] Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.700005 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4x5j\" (UniqueName: \"kubernetes.io/projected/39bc9387-f295-4aec-ad66-8831265c0400-kube-api-access-f4x5j\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.700073 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-catalog-content\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.700117 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-utilities\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.801755 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4x5j\" (UniqueName: \"kubernetes.io/projected/39bc9387-f295-4aec-ad66-8831265c0400-kube-api-access-f4x5j\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.801881 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-catalog-content\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.801962 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-utilities\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.802441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-catalog-content\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.807022 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bc9387-f295-4aec-ad66-8831265c0400-utilities\") pod \"redhat-marketplace-fxcrw\" (UID: 
\"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.827664 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4x5j\" (UniqueName: \"kubernetes.io/projected/39bc9387-f295-4aec-ad66-8831265c0400-kube-api-access-f4x5j\") pod \"redhat-marketplace-fxcrw\" (UID: \"39bc9387-f295-4aec-ad66-8831265c0400\") " pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.875817 4775 generic.go:334] "Generic (PLEG): container finished" podID="ed5c162e-62a9-4760-b5e0-a249a70225a0" containerID="b1ed211a24486fc778a0c4e86d565a72a63e7d607df308faee3143b25d118281" exitCode=0 Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.875876 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jjcj" event={"ID":"ed5c162e-62a9-4760-b5e0-a249a70225a0","Type":"ContainerDied","Data":"b1ed211a24486fc778a0c4e86d565a72a63e7d607df308faee3143b25d118281"} Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.876068 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jjcj" event={"ID":"ed5c162e-62a9-4760-b5e0-a249a70225a0","Type":"ContainerStarted","Data":"7ff35cfb0cf5ad9ea510a287522f3050d687da21d1562f1aa925203d3b208c3b"} Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.887129 4775 generic.go:334] "Generic (PLEG): container finished" podID="d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5" containerID="a39528741de9229819cfbf91ec99690572fd8296ff83d569dd5ae78787787e9e" exitCode=0 Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.887219 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb2pb" event={"ID":"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5","Type":"ContainerDied","Data":"a39528741de9229819cfbf91ec99690572fd8296ff83d569dd5ae78787787e9e"} Jan 23 14:10:56 crc kubenswrapper[4775]: I0123 14:10:56.888010 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.368097 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fxcrw"] Jan 23 14:10:57 crc kubenswrapper[4775]: W0123 14:10:57.383203 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39bc9387_f295_4aec_ad66_8831265c0400.slice/crio-fac8df1d9dbccf1d775642e9100fc911fdd3ff0f8ffbc284740136ee14d51f4b WatchSource:0}: Error finding container fac8df1d9dbccf1d775642e9100fc911fdd3ff0f8ffbc284740136ee14d51f4b: Status 404 returned error can't find the container with id fac8df1d9dbccf1d775642e9100fc911fdd3ff0f8ffbc284740136ee14d51f4b Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.895007 4775 generic.go:334] "Generic (PLEG): container finished" podID="39bc9387-f295-4aec-ad66-8831265c0400" containerID="2ca374f668aa98ec92160c88d95aa0bf42cc77656b24f0b3a81251c876059f6d" exitCode=0 Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.895306 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxcrw" event={"ID":"39bc9387-f295-4aec-ad66-8831265c0400","Type":"ContainerDied","Data":"2ca374f668aa98ec92160c88d95aa0bf42cc77656b24f0b3a81251c876059f6d"} Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.895332 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxcrw" event={"ID":"39bc9387-f295-4aec-ad66-8831265c0400","Type":"ContainerStarted","Data":"fac8df1d9dbccf1d775642e9100fc911fdd3ff0f8ffbc284740136ee14d51f4b"} Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.903435 4775 generic.go:334] "Generic (PLEG): container finished" podID="0c94dee4-8e79-4f60-a8b9-2c1f33490ba7" containerID="0e0ade394de3c5d4b6ec38f9d3ab7dec24f5000eeea85a3447b591f5dd1b8390" exitCode=0 Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.903545 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4qm" event={"ID":"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7","Type":"ContainerDied","Data":"0e0ade394de3c5d4b6ec38f9d3ab7dec24f5000eeea85a3447b591f5dd1b8390"} Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.909197 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bb2pb" event={"ID":"d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5","Type":"ContainerStarted","Data":"c428da0ffb577d1a5b9dfe716486a04460434844ab13ea932f708b3e9c7dd709"} Jan 23 14:10:57 crc kubenswrapper[4775]: I0123 14:10:57.962375 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bb2pb" podStartSLOduration=2.493635352 podStartE2EDuration="4.962358308s" podCreationTimestamp="2026-01-23 14:10:53 +0000 UTC" firstStartedPulling="2026-01-23 14:10:54.858599842 +0000 UTC m=+401.853428582" lastFinishedPulling="2026-01-23 14:10:57.327322808 +0000 UTC m=+404.322151538" observedRunningTime="2026-01-23 14:10:57.959590606 +0000 UTC m=+404.954419366" watchObservedRunningTime="2026-01-23 14:10:57.962358308 +0000 UTC m=+404.957187048" Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.928759 4775 generic.go:334] "Generic (PLEG): container finished" podID="ed5c162e-62a9-4760-b5e0-a249a70225a0" containerID="f4e7579e99a37bc77c51aec07b40d31bbecb98f6aa3e493ddef12ea82b70776e" exitCode=0 Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.928886 4775 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jjcj" event={"ID":"ed5c162e-62a9-4760-b5e0-a249a70225a0","Type":"ContainerDied","Data":"f4e7579e99a37bc77c51aec07b40d31bbecb98f6aa3e493ddef12ea82b70776e"} Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.951031 4775 generic.go:334] "Generic (PLEG): container finished" podID="39bc9387-f295-4aec-ad66-8831265c0400" containerID="24a20544bd98c05044377edf9951f09561f4ecff0d2541728a199f5d87991f32" exitCode=0 Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.951168 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxcrw" event={"ID":"39bc9387-f295-4aec-ad66-8831265c0400","Type":"ContainerDied","Data":"24a20544bd98c05044377edf9951f09561f4ecff0d2541728a199f5d87991f32"} Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.969253 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sx4qm" event={"ID":"0c94dee4-8e79-4f60-a8b9-2c1f33490ba7","Type":"ContainerStarted","Data":"1b2aabd99ad88932381935bc241ac835de5067e7f97071950b5736763e3d2bce"} Jan 23 14:10:58 crc kubenswrapper[4775]: I0123 14:10:58.991702 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sx4qm" podStartSLOduration=2.407028284 podStartE2EDuration="4.991678641s" podCreationTimestamp="2026-01-23 14:10:54 +0000 UTC" firstStartedPulling="2026-01-23 14:10:55.869341196 +0000 UTC m=+402.864169936" lastFinishedPulling="2026-01-23 14:10:58.453991553 +0000 UTC m=+405.448820293" observedRunningTime="2026-01-23 14:10:58.986364594 +0000 UTC m=+405.981193354" watchObservedRunningTime="2026-01-23 14:10:58.991678641 +0000 UTC m=+405.986507381" Jan 23 14:10:59 crc kubenswrapper[4775]: I0123 14:10:59.974905 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fxcrw" event={"ID":"39bc9387-f295-4aec-ad66-8831265c0400","Type":"ContainerStarted","Data":"009c56fbfdb39c22bbd66c058979011943b0707f6f91b40e918b6846e71897ff"} Jan 23 14:10:59 crc kubenswrapper[4775]: I0123 14:10:59.977844 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8jjcj" event={"ID":"ed5c162e-62a9-4760-b5e0-a249a70225a0","Type":"ContainerStarted","Data":"8c3a05ad5d8d3703f17b3d70b8b67f738074138eb987127c6dd602fb6cf5f591"} Jan 23 14:10:59 crc kubenswrapper[4775]: I0123 14:10:59.993208 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fxcrw" podStartSLOduration=2.149992542 podStartE2EDuration="3.9931894s" podCreationTimestamp="2026-01-23 14:10:56 +0000 UTC" firstStartedPulling="2026-01-23 14:10:57.897406825 +0000 UTC m=+404.892235565" lastFinishedPulling="2026-01-23 14:10:59.740603683 +0000 UTC m=+406.735432423" observedRunningTime="2026-01-23 14:10:59.990242793 +0000 UTC m=+406.985071543" watchObservedRunningTime="2026-01-23 14:10:59.9931894 +0000 UTC m=+406.988018140" Jan 23 14:11:00 crc kubenswrapper[4775]: I0123 14:11:00.014538 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8jjcj" podStartSLOduration=2.543099527 podStartE2EDuration="5.014519962s" podCreationTimestamp="2026-01-23 14:10:55 +0000 UTC" firstStartedPulling="2026-01-23 14:10:56.881753338 +0000 UTC m=+403.876582118" lastFinishedPulling="2026-01-23 14:10:59.353173813 +0000 UTC m=+406.348002553" observedRunningTime="2026-01-23 
14:11:00.012013128 +0000 UTC m=+407.006841888" watchObservedRunningTime="2026-01-23 14:11:00.014519962 +0000 UTC m=+407.009348702" Jan 23 14:11:03 crc kubenswrapper[4775]: I0123 14:11:03.498785 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:11:03 crc kubenswrapper[4775]: I0123 14:11:03.499577 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:11:03 crc kubenswrapper[4775]: I0123 14:11:03.571751 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:11:03 crc kubenswrapper[4775]: I0123 14:11:03.869312 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" podUID="85b405af-7314-4e53-93a5-252b69153561" containerName="registry" containerID="cri-o://4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e" gracePeriod=30 Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.047672 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bb2pb" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.497587 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.497635 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.828678 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.921861 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.921937 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.921967 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.922001 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.922044 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkptx\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx\") pod 
\"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.922067 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.922238 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.922259 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates\") pod \"85b405af-7314-4e53-93a5-252b69153561\" (UID: \"85b405af-7314-4e53-93a5-252b69153561\") " Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.923306 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.923631 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.929188 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.932179 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.932996 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx" (OuterVolumeSpecName: "kube-api-access-hkptx") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "kube-api-access-hkptx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.933156 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.933595 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:11:04 crc kubenswrapper[4775]: I0123 14:11:04.951061 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "85b405af-7314-4e53-93a5-252b69153561" (UID: "85b405af-7314-4e53-93a5-252b69153561"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.006676 4775 generic.go:334] "Generic (PLEG): container finished" podID="85b405af-7314-4e53-93a5-252b69153561" containerID="4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e" exitCode=0 Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.006754 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.006821 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" event={"ID":"85b405af-7314-4e53-93a5-252b69153561","Type":"ContainerDied","Data":"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e"} Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.006872 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-xpwjl" event={"ID":"85b405af-7314-4e53-93a5-252b69153561","Type":"ContainerDied","Data":"b50d7a209d2fcc5cb17e88e539bff4914e9d70de68aa4c3a0de07ad93e7848e4"} Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.006895 4775 scope.go:117] "RemoveContainer" containerID="4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.023463 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.023484 4775 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.023493 4775 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/85b405af-7314-4e53-93a5-252b69153561-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 
14:11:05.023502 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkptx\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-kube-api-access-hkptx\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.024183 4775 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/85b405af-7314-4e53-93a5-252b69153561-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.024216 4775 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/85b405af-7314-4e53-93a5-252b69153561-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.024226 4775 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/85b405af-7314-4e53-93a5-252b69153561-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.035983 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.038263 4775 scope.go:117] "RemoveContainer" containerID="4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e" Jan 23 14:11:05 crc kubenswrapper[4775]: E0123 14:11:05.038725 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e\": container with ID starting with 4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e not found: ID does not exist" containerID="4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.038753 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e"} err="failed to get container status \"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e\": rpc error: code = NotFound desc = could not find container \"4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e\": container with ID starting with 4284b5552eca9842bbe2aed75c1f5823dcb142543281afc7abbca3b100b2fc8e not found: ID does not exist" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.041363 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-xpwjl"] Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.530333 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-sx4qm" podUID="0c94dee4-8e79-4f60-a8b9-2c1f33490ba7" containerName="registry-server" probeResult="failure" output=< Jan 23 14:11:05 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Jan 23 14:11:05 crc kubenswrapper[4775]: > Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.726539 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85b405af-7314-4e53-93a5-252b69153561" path="/var/lib/kubelet/pods/85b405af-7314-4e53-93a5-252b69153561/volumes" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.903255 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:11:05 crc 
kubenswrapper[4775]: I0123 14:11:05.903332 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:11:05 crc kubenswrapper[4775]: I0123 14:11:05.944655 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.049735 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8jjcj" Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.558949 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"] Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.559140 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" podUID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" containerName="route-controller-manager" containerID="cri-o://781a04fc229c3442a54b74394d8d8073527ad1460a3c3be51f6f7244137482ea" gracePeriod=30 Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.889036 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.889100 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:11:06 crc kubenswrapper[4775]: I0123 14:11:06.929384 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.020965 4775 generic.go:334] "Generic (PLEG): container finished" podID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" containerID="781a04fc229c3442a54b74394d8d8073527ad1460a3c3be51f6f7244137482ea" exitCode=0 Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.021857 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" event={"ID":"d7ab4aa6-c476-4952-a259-e1e63a42bb69","Type":"ContainerDied","Data":"781a04fc229c3442a54b74394d8d8073527ad1460a3c3be51f6f7244137482ea"} Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.067693 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fxcrw" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.451394 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.559528 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca\") pod \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.559599 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert\") pod \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.559720 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dch9g\" (UniqueName: \"kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g\") pod \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.559743 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config\") pod \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\" (UID: \"d7ab4aa6-c476-4952-a259-e1e63a42bb69\") " Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.560873 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config" (OuterVolumeSpecName: "config") pod "d7ab4aa6-c476-4952-a259-e1e63a42bb69" (UID: "d7ab4aa6-c476-4952-a259-e1e63a42bb69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.561112 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7ab4aa6-c476-4952-a259-e1e63a42bb69" (UID: "d7ab4aa6-c476-4952-a259-e1e63a42bb69"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.565493 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7ab4aa6-c476-4952-a259-e1e63a42bb69" (UID: "d7ab4aa6-c476-4952-a259-e1e63a42bb69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.566175 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g" (OuterVolumeSpecName: "kube-api-access-dch9g") pod "d7ab4aa6-c476-4952-a259-e1e63a42bb69" (UID: "d7ab4aa6-c476-4952-a259-e1e63a42bb69"). InnerVolumeSpecName "kube-api-access-dch9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.661161 4775 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.661208 4775 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7ab4aa6-c476-4952-a259-e1e63a42bb69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.661223 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dch9g\" (UniqueName: \"kubernetes.io/projected/d7ab4aa6-c476-4952-a259-e1e63a42bb69-kube-api-access-dch9g\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:07 crc kubenswrapper[4775]: I0123 14:11:07.661237 4775 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7ab4aa6-c476-4952-a259-e1e63a42bb69-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.030794 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.031560 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq" event={"ID":"d7ab4aa6-c476-4952-a259-e1e63a42bb69","Type":"ContainerDied","Data":"0d59494029faa0dc8c83935b2a8d96eb1666ed423d428c52740a79423310818f"} Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.031625 4775 scope.go:117] "RemoveContainer" containerID="781a04fc229c3442a54b74394d8d8073527ad1460a3c3be51f6f7244137482ea" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.056098 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"] Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.064600 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76946b564d-nl7wq"] Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.124428 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t"] Jan 23 14:11:08 crc kubenswrapper[4775]: E0123 14:11:08.124652 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85b405af-7314-4e53-93a5-252b69153561" containerName="registry" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.124666 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="85b405af-7314-4e53-93a5-252b69153561" containerName="registry" Jan 23 14:11:08 crc kubenswrapper[4775]: E0123 14:11:08.124682 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" containerName="route-controller-manager" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.124691 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" containerName="route-controller-manager" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.124825 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="85b405af-7314-4e53-93a5-252b69153561" containerName="registry" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.124838 4775 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" containerName="route-controller-manager" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.125254 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.127346 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.127411 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.127571 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.127596 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.127949 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.131045 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.137837 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t"] Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.166417 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-config\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.166471 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-client-ca\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.166568 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551df5d9-a597-45dd-bee6-d189f022e455-serving-cert\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.166660 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7v4k\" (UniqueName: \"kubernetes.io/projected/551df5d9-a597-45dd-bee6-d189f022e455-kube-api-access-c7v4k\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 
14:11:08.268072 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7v4k\" (UniqueName: \"kubernetes.io/projected/551df5d9-a597-45dd-bee6-d189f022e455-kube-api-access-c7v4k\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.268170 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-config\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.268211 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-client-ca\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.268303 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551df5d9-a597-45dd-bee6-d189f022e455-serving-cert\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.269377 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-client-ca\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.269721 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/551df5d9-a597-45dd-bee6-d189f022e455-config\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.284002 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/551df5d9-a597-45dd-bee6-d189f022e455-serving-cert\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.290437 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7v4k\" (UniqueName: \"kubernetes.io/projected/551df5d9-a597-45dd-bee6-d189f022e455-kube-api-access-c7v4k\") pod \"route-controller-manager-5b89f6874d-q9f2t\" (UID: \"551df5d9-a597-45dd-bee6-d189f022e455\") " pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.469529 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:08 crc kubenswrapper[4775]: I0123 14:11:08.924113 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t"] Jan 23 14:11:09 crc kubenswrapper[4775]: I0123 14:11:09.037207 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" event={"ID":"551df5d9-a597-45dd-bee6-d189f022e455","Type":"ContainerStarted","Data":"cb31d93d1fc5e5f6be637dbe5d5d830acc2a27cabe7f04f8d82c7df7921b9df1"} Jan 23 14:11:09 crc kubenswrapper[4775]: I0123 14:11:09.720486 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7ab4aa6-c476-4952-a259-e1e63a42bb69" path="/var/lib/kubelet/pods/d7ab4aa6-c476-4952-a259-e1e63a42bb69/volumes" Jan 23 14:11:10 crc kubenswrapper[4775]: I0123 14:11:10.044417 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" event={"ID":"551df5d9-a597-45dd-bee6-d189f022e455","Type":"ContainerStarted","Data":"247c01734aec8198b78c8d30ee090eb949696c01452950dc2c29c4a7df0b82eb"} Jan 23 14:11:10 crc kubenswrapper[4775]: I0123 14:11:10.044671 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:10 crc kubenswrapper[4775]: I0123 14:11:10.050852 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" Jan 23 14:11:10 crc kubenswrapper[4775]: I0123 14:11:10.063666 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5b89f6874d-q9f2t" podStartSLOduration=4.06364858 podStartE2EDuration="4.06364858s" podCreationTimestamp="2026-01-23 14:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:11:10.062515677 +0000 UTC m=+417.057344457" watchObservedRunningTime="2026-01-23 14:11:10.06364858 +0000 UTC m=+417.058477320" Jan 23 14:11:14 crc kubenswrapper[4775]: I0123 14:11:14.558571 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:11:14 crc kubenswrapper[4775]: I0123 14:11:14.599657 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sx4qm" Jan 23 14:12:53 crc kubenswrapper[4775]: I0123 14:12:53.219581 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:12:53 crc kubenswrapper[4775]: I0123 14:12:53.220924 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:13:23 crc kubenswrapper[4775]: I0123 14:13:23.219295 4775 patch_prober.go:28] interesting 
pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:13:23 crc kubenswrapper[4775]: I0123 14:13:23.220233 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:13:53 crc kubenswrapper[4775]: I0123 14:13:53.219922 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:13:53 crc kubenswrapper[4775]: I0123 14:13:53.220554 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:13:53 crc kubenswrapper[4775]: I0123 14:13:53.220621 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:13:53 crc kubenswrapper[4775]: I0123 14:13:53.221586 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:13:53 crc kubenswrapper[4775]: I0123 14:13:53.221691 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb" gracePeriod=600 Jan 23 14:13:54 crc kubenswrapper[4775]: I0123 14:13:54.116295 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb" exitCode=0 Jan 23 14:13:54 crc kubenswrapper[4775]: I0123 14:13:54.116406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb"} Jan 23 14:13:54 crc kubenswrapper[4775]: I0123 14:13:54.117111 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13"} Jan 23 14:13:54 crc kubenswrapper[4775]: I0123 14:13:54.117149 4775 scope.go:117] "RemoveContainer" containerID="64681a72387a3235a4c6d3370b32de4e57c80d8102b47cdde5e10511ccb7381b" Jan 23 14:15:00 
crc kubenswrapper[4775]: I0123 14:15:00.225780 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"] Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.228048 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.235724 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"] Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.236272 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.236582 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.418313 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.418784 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.418919 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jdf7\" (UniqueName: \"kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.520762 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jdf7\" (UniqueName: \"kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.520894 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.520963 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.522759 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.531778 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.551960 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jdf7\" (UniqueName: \"kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7\") pod \"collect-profiles-29486295-frwlc\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:00 crc kubenswrapper[4775]: I0123 14:15:00.575316 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:01 crc kubenswrapper[4775]: I0123 14:15:01.036580 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc"] Jan 23 14:15:01 crc kubenswrapper[4775]: I0123 14:15:01.564270 4775 generic.go:334] "Generic (PLEG): container finished" podID="c37bf395-5578-4ad9-b210-8dd70a3e7d7a" containerID="15a8d47f089ab5d6d8e17473d5ab659be4c60d627bd808897d5ab6a5904a76cc" exitCode=0 Jan 23 14:15:01 crc kubenswrapper[4775]: I0123 14:15:01.564360 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" event={"ID":"c37bf395-5578-4ad9-b210-8dd70a3e7d7a","Type":"ContainerDied","Data":"15a8d47f089ab5d6d8e17473d5ab659be4c60d627bd808897d5ab6a5904a76cc"} Jan 23 14:15:01 crc kubenswrapper[4775]: I0123 14:15:01.564410 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" event={"ID":"c37bf395-5578-4ad9-b210-8dd70a3e7d7a","Type":"ContainerStarted","Data":"b78398132297fd0084b762a794afb4b03cdb4aa115c5f65a549ba55cd1bb09a2"} Jan 23 14:15:02 crc kubenswrapper[4775]: I0123 14:15:02.891583 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.060465 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume\") pod \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.060938 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jdf7\" (UniqueName: \"kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7\") pod \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.061001 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume\") pod \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\" (UID: \"c37bf395-5578-4ad9-b210-8dd70a3e7d7a\") " Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.061437 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume" (OuterVolumeSpecName: "config-volume") pod "c37bf395-5578-4ad9-b210-8dd70a3e7d7a" (UID: "c37bf395-5578-4ad9-b210-8dd70a3e7d7a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.067500 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7" (OuterVolumeSpecName: "kube-api-access-8jdf7") pod "c37bf395-5578-4ad9-b210-8dd70a3e7d7a" (UID: "c37bf395-5578-4ad9-b210-8dd70a3e7d7a"). InnerVolumeSpecName "kube-api-access-8jdf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.068528 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c37bf395-5578-4ad9-b210-8dd70a3e7d7a" (UID: "c37bf395-5578-4ad9-b210-8dd70a3e7d7a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.162888 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.162924 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jdf7\" (UniqueName: \"kubernetes.io/projected/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-kube-api-access-8jdf7\") on node \"crc\" DevicePath \"\"" Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.162937 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c37bf395-5578-4ad9-b210-8dd70a3e7d7a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.579530 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" event={"ID":"c37bf395-5578-4ad9-b210-8dd70a3e7d7a","Type":"ContainerDied","Data":"b78398132297fd0084b762a794afb4b03cdb4aa115c5f65a549ba55cd1bb09a2"} Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.579576 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b78398132297fd0084b762a794afb4b03cdb4aa115c5f65a549ba55cd1bb09a2" Jan 23 14:15:03 crc kubenswrapper[4775]: I0123 14:15:03.579623 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486295-frwlc" Jan 23 14:15:53 crc kubenswrapper[4775]: I0123 14:15:53.219557 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:15:53 crc kubenswrapper[4775]: I0123 14:15:53.220479 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:16:23 crc kubenswrapper[4775]: I0123 14:16:23.220228 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:16:23 crc kubenswrapper[4775]: I0123 14:16:23.221387 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:16:53 crc kubenswrapper[4775]: I0123 14:16:53.219571 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:16:53 crc kubenswrapper[4775]: I0123 14:16:53.220922 
4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:16:53 crc kubenswrapper[4775]: I0123 14:16:53.221791 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:16:53 crc kubenswrapper[4775]: I0123 14:16:53.224669 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:16:53 crc kubenswrapper[4775]: I0123 14:16:53.224832 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13" gracePeriod=600 Jan 23 14:16:54 crc kubenswrapper[4775]: I0123 14:16:54.330627 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13" exitCode=0 Jan 23 14:16:54 crc kubenswrapper[4775]: I0123 14:16:54.330730 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13"} Jan 23 14:16:54 crc kubenswrapper[4775]: I0123 14:16:54.331288 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881"} Jan 23 14:16:54 crc kubenswrapper[4775]: I0123 14:16:54.331326 4775 scope.go:117] "RemoveContainer" containerID="3a30391cad6397529420dfc5378ada691294f3663e7d36abc04ee2debc01dfeb" Jan 23 14:16:55 crc kubenswrapper[4775]: I0123 14:16:55.233680 4775 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.459383 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll"] Jan 23 14:17:29 crc kubenswrapper[4775]: E0123 14:17:29.460374 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c37bf395-5578-4ad9-b210-8dd70a3e7d7a" containerName="collect-profiles" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.460392 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c37bf395-5578-4ad9-b210-8dd70a3e7d7a" containerName="collect-profiles" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.460497 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c37bf395-5578-4ad9-b210-8dd70a3e7d7a" containerName="collect-profiles" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.461260 4775 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.463130 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.474482 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll"] Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.576082 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.576567 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.576623 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7x8s\" (UniqueName: \"kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.678310 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.678393 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.678430 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7x8s\" (UniqueName: \"kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.679574 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.682198 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.711032 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7x8s\" (UniqueName: \"kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.775610 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:29 crc kubenswrapper[4775]: I0123 14:17:29.973470 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll"] Jan 23 14:17:30 crc kubenswrapper[4775]: I0123 14:17:30.570865 4775 generic.go:334] "Generic (PLEG): container finished" podID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerID="aeec217b1013090323ad7b543d66684166ebbe2f392de79d83faa5baea26b0a7" exitCode=0 Jan 23 14:17:30 crc kubenswrapper[4775]: I0123 14:17:30.570930 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" event={"ID":"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d","Type":"ContainerDied","Data":"aeec217b1013090323ad7b543d66684166ebbe2f392de79d83faa5baea26b0a7"} Jan 23 14:17:30 crc kubenswrapper[4775]: I0123 14:17:30.570968 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" event={"ID":"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d","Type":"ContainerStarted","Data":"0bc1d466cfd8bb22da2d97e26f7719a28ede043102dedccaeaca69546f856582"} Jan 23 14:17:30 crc kubenswrapper[4775]: I0123 14:17:30.573207 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.550107 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.552370 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.568933 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.708557 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.708892 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.708949 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gpcj\" (UniqueName: \"kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.809956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.810007 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gpcj\" (UniqueName: \"kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.810054 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.810717 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.811047 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.842080 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4gpcj\" (UniqueName: \"kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj\") pod \"redhat-operators-lflht\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:31 crc kubenswrapper[4775]: I0123 14:17:31.898781 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.093420 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:32 crc kubenswrapper[4775]: W0123 14:17:32.097335 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod748a9ff6_4b80_40f9_ae41_37bc66c272f6.slice/crio-93c922f1487ddf500d1f9351c5caa4eedc2618e3851fee725cf2afd7fd0be358 WatchSource:0}: Error finding container 93c922f1487ddf500d1f9351c5caa4eedc2618e3851fee725cf2afd7fd0be358: Status 404 returned error can't find the container with id 93c922f1487ddf500d1f9351c5caa4eedc2618e3851fee725cf2afd7fd0be358 Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.584293 4775 generic.go:334] "Generic (PLEG): container finished" podID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerID="dea1dd28a2731b8f977fbd369d244552810d648501576d68e727344cd0d1e33e" exitCode=0 Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.584373 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" event={"ID":"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d","Type":"ContainerDied","Data":"dea1dd28a2731b8f977fbd369d244552810d648501576d68e727344cd0d1e33e"} Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.587170 4775 generic.go:334] "Generic (PLEG): container finished" podID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerID="f97a99a72e6da74778e3548426a45903a3d520396f1383be0c6443f902f8596a" exitCode=0 Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.587213 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerDied","Data":"f97a99a72e6da74778e3548426a45903a3d520396f1383be0c6443f902f8596a"} Jan 23 14:17:32 crc kubenswrapper[4775]: I0123 14:17:32.587238 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerStarted","Data":"93c922f1487ddf500d1f9351c5caa4eedc2618e3851fee725cf2afd7fd0be358"} Jan 23 14:17:33 crc kubenswrapper[4775]: I0123 14:17:33.596053 4775 generic.go:334] "Generic (PLEG): container finished" podID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerID="ec4a2ae89c81b2c9040c94e5b02d11291b835380df3a78ae667e5991cc2029bc" exitCode=0 Jan 23 14:17:33 crc kubenswrapper[4775]: I0123 14:17:33.596362 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" event={"ID":"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d","Type":"ContainerDied","Data":"ec4a2ae89c81b2c9040c94e5b02d11291b835380df3a78ae667e5991cc2029bc"} Jan 23 14:17:34 crc kubenswrapper[4775]: I0123 14:17:34.605246 4775 generic.go:334] "Generic (PLEG): container finished" podID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerID="61ce9ba1643c99fc37fc14a63747755de7afc6a9d3819f1c9a37d622b4cf7f7f" exitCode=0 Jan 23 14:17:34 
crc kubenswrapper[4775]: I0123 14:17:34.605374 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerDied","Data":"61ce9ba1643c99fc37fc14a63747755de7afc6a9d3819f1c9a37d622b4cf7f7f"} Jan 23 14:17:34 crc kubenswrapper[4775]: I0123 14:17:34.858980 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.055373 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util\") pod \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.055478 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle\") pod \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.055580 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7x8s\" (UniqueName: \"kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s\") pod \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\" (UID: \"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d\") " Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.057332 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle" (OuterVolumeSpecName: "bundle") pod "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" (UID: "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.065114 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s" (OuterVolumeSpecName: "kube-api-access-j7x8s") pod "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" (UID: "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d"). InnerVolumeSpecName "kube-api-access-j7x8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.069927 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util" (OuterVolumeSpecName: "util") pod "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" (UID: "d4d873a3-d698-439c-a1de-c9a7fc9e1e6d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.157750 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-util\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.157881 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.157901 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7x8s\" (UniqueName: \"kubernetes.io/projected/d4d873a3-d698-439c-a1de-c9a7fc9e1e6d-kube-api-access-j7x8s\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.613716 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerStarted","Data":"f06d4f6767a81a7749fa41e7dfaa09c6b4cb54aa8866d79a23981a879ae6dde5"} Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.616108 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" event={"ID":"d4d873a3-d698-439c-a1de-c9a7fc9e1e6d","Type":"ContainerDied","Data":"0bc1d466cfd8bb22da2d97e26f7719a28ede043102dedccaeaca69546f856582"} Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.616150 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll" Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.616151 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bc1d466cfd8bb22da2d97e26f7719a28ede043102dedccaeaca69546f856582" Jan 23 14:17:35 crc kubenswrapper[4775]: I0123 14:17:35.642371 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lflht" podStartSLOduration=2.130203808 podStartE2EDuration="4.642351755s" podCreationTimestamp="2026-01-23 14:17:31 +0000 UTC" firstStartedPulling="2026-01-23 14:17:32.588351788 +0000 UTC m=+799.583180548" lastFinishedPulling="2026-01-23 14:17:35.100499745 +0000 UTC m=+802.095328495" observedRunningTime="2026-01-23 14:17:35.640986166 +0000 UTC m=+802.635814946" watchObservedRunningTime="2026-01-23 14:17:35.642351755 +0000 UTC m=+802.637180495" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.066207 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gq778"] Jan 23 14:17:37 crc kubenswrapper[4775]: E0123 14:17:37.066445 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="util" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.066461 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="util" Jan 23 14:17:37 crc kubenswrapper[4775]: E0123 14:17:37.066476 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="extract" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.066482 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="extract" Jan 23 14:17:37 crc 
kubenswrapper[4775]: E0123 14:17:37.066498 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="pull" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.066505 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="pull" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.066620 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4d873a3-d698-439c-a1de-c9a7fc9e1e6d" containerName="extract" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.067067 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-gq778" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.069200 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.069235 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.069270 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-9jjmk" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.076588 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gq778"] Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.190733 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw8j5\" (UniqueName: \"kubernetes.io/projected/ebe0482d-2988-4f4d-929f-4c2980e19cf3-kube-api-access-tw8j5\") pod \"nmstate-operator-646758c888-gq778\" (UID: \"ebe0482d-2988-4f4d-929f-4c2980e19cf3\") " pod="openshift-nmstate/nmstate-operator-646758c888-gq778" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.291973 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw8j5\" (UniqueName: \"kubernetes.io/projected/ebe0482d-2988-4f4d-929f-4c2980e19cf3-kube-api-access-tw8j5\") pod \"nmstate-operator-646758c888-gq778\" (UID: \"ebe0482d-2988-4f4d-929f-4c2980e19cf3\") " pod="openshift-nmstate/nmstate-operator-646758c888-gq778" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.311190 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw8j5\" (UniqueName: \"kubernetes.io/projected/ebe0482d-2988-4f4d-929f-4c2980e19cf3-kube-api-access-tw8j5\") pod \"nmstate-operator-646758c888-gq778\" (UID: \"ebe0482d-2988-4f4d-929f-4c2980e19cf3\") " pod="openshift-nmstate/nmstate-operator-646758c888-gq778" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.402392 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-gq778" Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.604573 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-gq778"] Jan 23 14:17:37 crc kubenswrapper[4775]: I0123 14:17:37.626145 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-gq778" event={"ID":"ebe0482d-2988-4f4d-929f-4c2980e19cf3","Type":"ContainerStarted","Data":"81e80e3ac5a5a67f5e8c1f2e60f5c610745d3f01670b445fa53431eaec080877"} Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859352 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qrvs8"] Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859779 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-controller" containerID="cri-o://1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" gracePeriod=30 Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859888 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="nbdb" containerID="cri-o://dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" gracePeriod=30 Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859941 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-acl-logging" containerID="cri-o://209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" gracePeriod=30 Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859917 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="northd" containerID="cri-o://a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" gracePeriod=30 Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.860056 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-node" containerID="cri-o://8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" gracePeriod=30 Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.860083 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="sbdb" containerID="cri-o://1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" gracePeriod=30 Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.859878 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" gracePeriod=30 Jan 23 14:17:38 crc kubenswrapper[4775]: I0123 14:17:38.907345 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" 
containerName="ovnkube-controller" containerID="cri-o://9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" gracePeriod=30 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.150754 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/3.log" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.153266 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovn-acl-logging/0.log" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.153767 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovn-controller/0.log" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.154352 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.204860 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vdg25"] Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205058 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="nbdb" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205069 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="nbdb" Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205080 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205087 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller" Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205093 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-controller" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205099 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-controller" Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205107 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="sbdb" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205112 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="sbdb" Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205121 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205126 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller" Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205136 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kubecfg-setup" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205142 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kubecfg-setup" Jan 23 14:17:39 
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205153 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-acl-logging"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205158 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-acl-logging"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205169 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-node"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205175 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-node"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205186 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205192 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205198 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="northd"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205204 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="northd"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205213 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-ovn-metrics"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205218 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-ovn-metrics"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205296 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="sbdb"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205307 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205313 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205320 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205326 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205335 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205341 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-node"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205347 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-acl-logging"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205353 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="kube-rbac-proxy-ovn-metrics"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205361 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="nbdb"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205369 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovn-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205375 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="northd"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205454 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205460 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: E0123 14:17:39.205624 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.205632 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerName="ovnkube-controller"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.207001 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220332 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220409 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220450 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220481 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220509 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220533 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220536 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220559 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220578 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220577 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log" (OuterVolumeSpecName: "node-log") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220604 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220670 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220700 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220747 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220603 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220712 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220756 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220774 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220823 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220828 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220842 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220839 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220868 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash" (OuterVolumeSpecName: "host-slash") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220867 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220878 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220863 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220938 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220854 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220840 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220968 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.220993 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221015 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221043 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6jls\" (UniqueName: \"kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls\") pod \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\" (UID: \"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06\") "
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221085 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket" (OuterVolumeSpecName: "log-socket") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221098 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221099 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221127 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221206 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-var-lib-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221240 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovn-node-metrics-cert\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221258 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-netd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221281 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-netns\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221337 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-script-lib\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221369 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-systemd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221391 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-bin\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221419 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-env-overrides\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221445 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-kubelet\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221476 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221540 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-systemd-units\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221579 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221677 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-config\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221732 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-ovn\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221758 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-node-log\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221794 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221839 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-slash\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221865 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-log-socket\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221888 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-etc-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221908 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm8td\" (UniqueName: \"kubernetes.io/projected/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-kube-api-access-hm8td\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.221968 4775 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222010 4775 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222026 4775 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222036 4775 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-slash\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222049 4775 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222058 4775 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222068 4775 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-log-socket\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222078 4775 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222087 4775 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222097 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222105 4775 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222114 4775 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222122 4775 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-node-log\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222131 4775 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222140 4775 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222149 4775 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.222159 4775 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.225908 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.225927 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls" (OuterVolumeSpecName: "kube-api-access-d6jls") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "kube-api-access-d6jls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.237452 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" (UID: "bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324034 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-node-log\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324192 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324210 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-node-log\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324248 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-slash\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324323 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-log-socket\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324341 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-etc-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324409 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-slash\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324432 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm8td\" (UniqueName: \"kubernetes.io/projected/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-kube-api-access-hm8td\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324443 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-log-socket\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324445 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-etc-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324565 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-var-lib-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324597 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-var-lib-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324622 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-netd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324696 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovn-node-metrics-cert\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324716 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-netd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324749 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-netns\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324833 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-run-netns\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324799 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-script-lib\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324882 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-systemd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324900 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-bin\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324933 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-env-overrides\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-kubelet\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324986 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325000 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-cni-bin\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325022 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-kubelet\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325041 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-systemd-units\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325018 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-systemd-units\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.324985 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-systemd\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325082 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325096 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-openvswitch\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325118 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-config\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325157 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25"
\"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-ovn\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325200 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-run-ovn\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325241 4775 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325256 4775 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325265 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6jls\" (UniqueName: \"kubernetes.io/projected/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06-kube-api-access-d6jls\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.325512 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-env-overrides\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.326063 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-config\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.326298 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovnkube-script-lib\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.328266 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-ovn-node-metrics-cert\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.347499 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm8td\" (UniqueName: \"kubernetes.io/projected/38c6f656-0f2d-4615-821c-f4aee4c9e2c3-kube-api-access-hm8td\") pod \"ovnkube-node-vdg25\" (UID: \"38c6f656-0f2d-4615-821c-f4aee4c9e2c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.519050 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:39 crc kubenswrapper[4775]: W0123 14:17:39.553465 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38c6f656_0f2d_4615_821c_f4aee4c9e2c3.slice/crio-863dd4cad80cb5b8f4a6a99dc850f0b89d6aa4c0d645ce215c26b6b1ea965b87 WatchSource:0}: Error finding container 863dd4cad80cb5b8f4a6a99dc850f0b89d6aa4c0d645ce215c26b6b1ea965b87: Status 404 returned error can't find the container with id 863dd4cad80cb5b8f4a6a99dc850f0b89d6aa4c0d645ce215c26b6b1ea965b87 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.636870 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/2.log" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.637229 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/1.log" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.637261 4775 generic.go:334] "Generic (PLEG): container finished" podID="ba4447c0-bada-49eb-b6b4-b25feff627a9" containerID="555e839180bbda237f6205ae573637b3ee9ad39df04b574cb5b7216b7c451510" exitCode=2 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.637308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerDied","Data":"555e839180bbda237f6205ae573637b3ee9ad39df04b574cb5b7216b7c451510"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.637344 4775 scope.go:117] "RemoveContainer" containerID="8f14be984531a60487db2daba36d9cba7f2bbafa8b8d68889c261f3b2260f058" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.637766 4775 scope.go:117] "RemoveContainer" containerID="555e839180bbda237f6205ae573637b3ee9ad39df04b574cb5b7216b7c451510" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.641639 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovnkube-controller/3.log" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.644614 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovn-acl-logging/0.log" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645094 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qrvs8_bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/ovn-controller/0.log" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645377 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" exitCode=0 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645398 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" exitCode=0 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645406 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" exitCode=0 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645413 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" exitCode=0 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645422 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" exitCode=0 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645430 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" exitCode=0 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645437 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" exitCode=143 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645444 4775 generic.go:334] "Generic (PLEG): container finished" podID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" exitCode=143 Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645481 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645506 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645525 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645534 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645543 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645553 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645562 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645568 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645575 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645581 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645586 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645592 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645597 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645602 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645607 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645614 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645621 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645627 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645633 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645638 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645644 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645648 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645654 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645659 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645664 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645668 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645675 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645682 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645688 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645693 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645698 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645703 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645708 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645713 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645718 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645722 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645727 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" event={"ID":"bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06","Type":"ContainerDied","Data":"c9b1bad48b28a1f69c2c2d6ac40d31127808a59f11181daf49f1fb5d9684dc62"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645741 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645748 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645752 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645757 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645763 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645768 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645773 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645777 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645782 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645787 4775 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.645886 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qrvs8" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.652117 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"863dd4cad80cb5b8f4a6a99dc850f0b89d6aa4c0d645ce215c26b6b1ea965b87"} Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.682723 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qrvs8"] Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.683932 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qrvs8"] Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.720232 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06" path="/var/lib/kubelet/pods/bd5906e8-fa10-4ad1-b8c2-6bf9d00a9c06/volumes" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.893392 4775 scope.go:117] "RemoveContainer" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.913473 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.950090 4775 scope.go:117] "RemoveContainer" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" Jan 23 14:17:39 crc kubenswrapper[4775]: I0123 14:17:39.968457 4775 scope.go:117] "RemoveContainer" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.037529 4775 scope.go:117] "RemoveContainer" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.056424 4775 scope.go:117] "RemoveContainer" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.075232 4775 scope.go:117] "RemoveContainer" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.099714 4775 scope.go:117] "RemoveContainer" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.123115 4775 scope.go:117] "RemoveContainer" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.146752 4775 scope.go:117] "RemoveContainer" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.165509 4775 scope.go:117] "RemoveContainer" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.169248 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": container with ID starting with 9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481 not found: ID does not exist" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.169305 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} err="failed to get container status \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": rpc error: code = NotFound desc = could not find container \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": container with ID starting with 9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.169340 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.169594 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": container with ID starting with 705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157 not found: ID does not exist" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.169625 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} err="failed to get container status \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": rpc error: code = NotFound desc = could not find container \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": container with ID starting with 705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.169640 4775 scope.go:117] "RemoveContainer" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.170077 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": container with ID starting with 1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c not found: ID does not exist" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.170124 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} err="failed to get container status \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": rpc error: code = NotFound desc = could not find container \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": container with ID starting with 1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.170151 4775 scope.go:117] "RemoveContainer" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.170640 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": container with ID starting with dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c not found: ID does not exist" 
containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.170665 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} err="failed to get container status \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": rpc error: code = NotFound desc = could not find container \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": container with ID starting with dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.170681 4775 scope.go:117] "RemoveContainer" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.171132 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": container with ID starting with a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316 not found: ID does not exist" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171180 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} err="failed to get container status \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": rpc error: code = NotFound desc = could not find container \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": container with ID starting with a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171208 4775 scope.go:117] "RemoveContainer" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.171567 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": container with ID starting with efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14 not found: ID does not exist" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171589 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} err="failed to get container status \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": rpc error: code = NotFound desc = could not find container \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": container with ID starting with efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171602 4775 scope.go:117] "RemoveContainer" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.171931 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": container with ID starting with 8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6 not found: ID does not exist" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171958 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} err="failed to get container status \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": rpc error: code = NotFound desc = could not find container \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": container with ID starting with 8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.171976 4775 scope.go:117] "RemoveContainer" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.172328 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": container with ID starting with 209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028 not found: ID does not exist" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.172355 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} err="failed to get container status \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": rpc error: code = NotFound desc = could not find container \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": container with ID starting with 209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.172384 4775 scope.go:117] "RemoveContainer" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" Jan 23 14:17:40 crc kubenswrapper[4775]: E0123 14:17:40.173777 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": container with ID starting with 1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a not found: ID does not exist" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.173834 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} err="failed to get container status \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": rpc error: code = NotFound desc = could not find container \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": container with ID starting with 1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.173850 4775 scope.go:117] "RemoveContainer" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" Jan 23 14:17:40 crc 
kubenswrapper[4775]: E0123 14:17:40.174161 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": container with ID starting with 684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40 not found: ID does not exist" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.174192 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} err="failed to get container status \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": rpc error: code = NotFound desc = could not find container \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": container with ID starting with 684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.174208 4775 scope.go:117] "RemoveContainer" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.175757 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} err="failed to get container status \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": rpc error: code = NotFound desc = could not find container \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": container with ID starting with 9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.175858 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.176153 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} err="failed to get container status \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": rpc error: code = NotFound desc = could not find container \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": container with ID starting with 705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.176175 4775 scope.go:117] "RemoveContainer" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.176408 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} err="failed to get container status \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": rpc error: code = NotFound desc = could not find container \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": container with ID starting with 1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.176452 4775 scope.go:117] "RemoveContainer" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" Jan 23 14:17:40 crc 
kubenswrapper[4775]: I0123 14:17:40.181025 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} err="failed to get container status \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": rpc error: code = NotFound desc = could not find container \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": container with ID starting with dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181111 4775 scope.go:117] "RemoveContainer" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181603 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} err="failed to get container status \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": rpc error: code = NotFound desc = could not find container \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": container with ID starting with a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181648 4775 scope.go:117] "RemoveContainer" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181904 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} err="failed to get container status \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": rpc error: code = NotFound desc = could not find container \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": container with ID starting with efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.181932 4775 scope.go:117] "RemoveContainer" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182227 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} err="failed to get container status \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": rpc error: code = NotFound desc = could not find container \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": container with ID starting with 8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182272 4775 scope.go:117] "RemoveContainer" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182618 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} err="failed to get container status \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": rpc error: code = NotFound desc = could not find container \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": container with ID 
starting with 209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182652 4775 scope.go:117] "RemoveContainer" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182964 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} err="failed to get container status \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": rpc error: code = NotFound desc = could not find container \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": container with ID starting with 1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.182994 4775 scope.go:117] "RemoveContainer" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183239 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} err="failed to get container status \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": rpc error: code = NotFound desc = could not find container \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": container with ID starting with 684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183264 4775 scope.go:117] "RemoveContainer" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183449 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} err="failed to get container status \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": rpc error: code = NotFound desc = could not find container \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": container with ID starting with 9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183473 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183863 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} err="failed to get container status \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": rpc error: code = NotFound desc = could not find container \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": container with ID starting with 705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.183895 4775 scope.go:117] "RemoveContainer" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.184356 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} err="failed to get container status \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": rpc error: code = NotFound desc = could not find container \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": container with ID starting with 1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.184396 4775 scope.go:117] "RemoveContainer" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.184945 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} err="failed to get container status \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": rpc error: code = NotFound desc = could not find container \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": container with ID starting with dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.184979 4775 scope.go:117] "RemoveContainer" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.185418 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} err="failed to get container status \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": rpc error: code = NotFound desc = could not find container \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": container with ID starting with a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.185472 4775 scope.go:117] "RemoveContainer" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.185850 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} err="failed to get container status \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": rpc error: code = NotFound desc = could not find container \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": container with ID starting with efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.185878 4775 scope.go:117] "RemoveContainer" containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186192 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} err="failed to get container status \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": rpc error: code = NotFound desc = could not find container \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": container with ID starting with 8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6 not found: ID does not exist" Jan 
23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186230 4775 scope.go:117] "RemoveContainer" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186528 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} err="failed to get container status \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": rpc error: code = NotFound desc = could not find container \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": container with ID starting with 209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186551 4775 scope.go:117] "RemoveContainer" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186856 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} err="failed to get container status \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": rpc error: code = NotFound desc = could not find container \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": container with ID starting with 1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.186889 4775 scope.go:117] "RemoveContainer" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.189508 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} err="failed to get container status \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": rpc error: code = NotFound desc = could not find container \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": container with ID starting with 684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.189548 4775 scope.go:117] "RemoveContainer" containerID="9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.190052 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481"} err="failed to get container status \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": rpc error: code = NotFound desc = could not find container \"9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481\": container with ID starting with 9cfa722113ffa24afa13db99ab2154d99907f2f97b8775f0d20c32582b0ee481 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.190096 4775 scope.go:117] "RemoveContainer" containerID="705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.190524 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157"} err="failed to get container status 
\"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": rpc error: code = NotFound desc = could not find container \"705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157\": container with ID starting with 705e5e63073fc9c3e2efda6b3c6fff7004f1d67a5cab5204d3670039ea832157 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.190553 4775 scope.go:117] "RemoveContainer" containerID="1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.190979 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c"} err="failed to get container status \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": rpc error: code = NotFound desc = could not find container \"1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c\": container with ID starting with 1476f55f17d3f2641686601941333f3b0524140b694c4652707094bd868a360c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.191005 4775 scope.go:117] "RemoveContainer" containerID="dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.191274 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c"} err="failed to get container status \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": rpc error: code = NotFound desc = could not find container \"dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c\": container with ID starting with dae5aaddaa024c74ed21e37bbe82a7e2e7683abbcfdecbc189f1451940e0767c not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.191306 4775 scope.go:117] "RemoveContainer" containerID="a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.191707 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316"} err="failed to get container status \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": rpc error: code = NotFound desc = could not find container \"a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316\": container with ID starting with a60a595155c1d9838fc663a4648a6a2898fb21462a4038184ae68273dbcce316 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.191734 4775 scope.go:117] "RemoveContainer" containerID="efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.192108 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14"} err="failed to get container status \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": rpc error: code = NotFound desc = could not find container \"efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14\": container with ID starting with efd4d52a168f9341f50143976c70e15a339769d13acc44270a2c85e7ff26bb14 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.192133 4775 scope.go:117] "RemoveContainer" 
containerID="8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.195881 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6"} err="failed to get container status \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": rpc error: code = NotFound desc = could not find container \"8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6\": container with ID starting with 8638e74de0d0ee2ecbe4751644986918f8cc1d4866ec70fb134303627e079de6 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.195916 4775 scope.go:117] "RemoveContainer" containerID="209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.196816 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028"} err="failed to get container status \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": rpc error: code = NotFound desc = could not find container \"209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028\": container with ID starting with 209b1b1723721cbc1353b6aff50cb06bf894da7a3498c962cd302272cb673028 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.196840 4775 scope.go:117] "RemoveContainer" containerID="1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.197225 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a"} err="failed to get container status \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": rpc error: code = NotFound desc = could not find container \"1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a\": container with ID starting with 1ef46c6f5e51161943625c0f595a146ad9bac1ff749bbaa72db3a6ee0936f86a not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.197243 4775 scope.go:117] "RemoveContainer" containerID="684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.197584 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40"} err="failed to get container status \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": rpc error: code = NotFound desc = could not find container \"684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40\": container with ID starting with 684fcb88699e25b9ae17ab6e2fa4571ee4ae5c8622b458b402f2d7f5deeb8e40 not found: ID does not exist" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.665363 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hpxpf_ba4447c0-bada-49eb-b6b4-b25feff627a9/kube-multus/2.log" Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.665530 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hpxpf" event={"ID":"ba4447c0-bada-49eb-b6b4-b25feff627a9","Type":"ContainerStarted","Data":"35159c6e24dab15d013038099a26fcbb008c7f6a1f958150f802dcc8702b8506"} Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 
14:17:40.671430 4775 generic.go:334] "Generic (PLEG): container finished" podID="38c6f656-0f2d-4615-821c-f4aee4c9e2c3" containerID="9187daf8b686d8372af7baba945baeca89c0029515684f3dc91d3d96357f2bd9" exitCode=0 Jan 23 14:17:40 crc kubenswrapper[4775]: I0123 14:17:40.671638 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerDied","Data":"9187daf8b686d8372af7baba945baeca89c0029515684f3dc91d3d96357f2bd9"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.681444 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"c35e5d38448e4975a03c78fd7f211148dd37ac8f2cda317b3c4153145a65cc9a"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.682094 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"8cb457df4eac4e589747ab484a5f807a39d18142f6cfa48cc0ff893acd85c539"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.682107 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"460bdc7dc880ad11053b17f10bffd40511d00b069076ba0c6d97ebdded4c96d4"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.682114 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"985dc868e6f32f76610ed213e4cd7a7c2421f288864e26c3fa1b4e44980591be"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.682126 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"68b5e32ac49da3b1c5b9c57952bd51944a321e88012f4a40c374887d5bab9567"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.682134 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"8c8608b88344e136cf1fa734e7f5bdebac9bd2bc412bd4c50f402123db06cd65"} Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.898887 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.899445 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:41 crc kubenswrapper[4775]: I0123 14:17:41.948227 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:42 crc kubenswrapper[4775]: I0123 14:17:42.733272 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:44 crc kubenswrapper[4775]: I0123 14:17:44.336322 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:44 crc kubenswrapper[4775]: I0123 14:17:44.705728 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" 
event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"a9de6b148a62eab5a9ad5811a12641fe4c9c27f232074aac3906b4345df10b51"} Jan 23 14:17:45 crc kubenswrapper[4775]: I0123 14:17:45.713174 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lflht" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="registry-server" containerID="cri-o://f06d4f6767a81a7749fa41e7dfaa09c6b4cb54aa8866d79a23981a879ae6dde5" gracePeriod=2 Jan 23 14:17:46 crc kubenswrapper[4775]: I0123 14:17:46.725019 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" event={"ID":"38c6f656-0f2d-4615-821c-f4aee4c9e2c3","Type":"ContainerStarted","Data":"159bb01d7666ae4f6e0c86adb311016ed0b8ba730d61bfaab8249999a1a855b6"} Jan 23 14:17:46 crc kubenswrapper[4775]: I0123 14:17:46.725764 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:46 crc kubenswrapper[4775]: I0123 14:17:46.725779 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:46 crc kubenswrapper[4775]: I0123 14:17:46.753622 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:46 crc kubenswrapper[4775]: I0123 14:17:46.757083 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" podStartSLOduration=7.757068566 podStartE2EDuration="7.757068566s" podCreationTimestamp="2026-01-23 14:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:17:46.756707855 +0000 UTC m=+813.751536595" watchObservedRunningTime="2026-01-23 14:17:46.757068566 +0000 UTC m=+813.751897306" Jan 23 14:17:47 crc kubenswrapper[4775]: I0123 14:17:47.731416 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:47 crc kubenswrapper[4775]: I0123 14:17:47.774766 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:17:49 crc kubenswrapper[4775]: I0123 14:17:49.745858 4775 generic.go:334] "Generic (PLEG): container finished" podID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerID="f06d4f6767a81a7749fa41e7dfaa09c6b4cb54aa8866d79a23981a879ae6dde5" exitCode=0 Jan 23 14:17:49 crc kubenswrapper[4775]: I0123 14:17:49.745934 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerDied","Data":"f06d4f6767a81a7749fa41e7dfaa09c6b4cb54aa8866d79a23981a879ae6dde5"} Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.392722 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.508212 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content\") pod \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.508267 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities\") pod \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.508320 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gpcj\" (UniqueName: \"kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj\") pod \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\" (UID: \"748a9ff6-4b80-40f9-ae41-37bc66c272f6\") " Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.509692 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities" (OuterVolumeSpecName: "utilities") pod "748a9ff6-4b80-40f9-ae41-37bc66c272f6" (UID: "748a9ff6-4b80-40f9-ae41-37bc66c272f6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.515757 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj" (OuterVolumeSpecName: "kube-api-access-4gpcj") pod "748a9ff6-4b80-40f9-ae41-37bc66c272f6" (UID: "748a9ff6-4b80-40f9-ae41-37bc66c272f6"). InnerVolumeSpecName "kube-api-access-4gpcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.610261 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.610519 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gpcj\" (UniqueName: \"kubernetes.io/projected/748a9ff6-4b80-40f9-ae41-37bc66c272f6-kube-api-access-4gpcj\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.657020 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "748a9ff6-4b80-40f9-ae41-37bc66c272f6" (UID: "748a9ff6-4b80-40f9-ae41-37bc66c272f6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.711594 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/748a9ff6-4b80-40f9-ae41-37bc66c272f6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.761071 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lflht" event={"ID":"748a9ff6-4b80-40f9-ae41-37bc66c272f6","Type":"ContainerDied","Data":"93c922f1487ddf500d1f9351c5caa4eedc2618e3851fee725cf2afd7fd0be358"} Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.761105 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lflht" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.761156 4775 scope.go:117] "RemoveContainer" containerID="f06d4f6767a81a7749fa41e7dfaa09c6b4cb54aa8866d79a23981a879ae6dde5" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.762832 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-gq778" event={"ID":"ebe0482d-2988-4f4d-929f-4c2980e19cf3","Type":"ContainerStarted","Data":"4a4c52e9e34702af099491e1040d0c536534dd8bfeeb011dd44cfac84f07079a"} Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.792445 4775 scope.go:117] "RemoveContainer" containerID="61ce9ba1643c99fc37fc14a63747755de7afc6a9d3819f1c9a37d622b4cf7f7f" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.796947 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-gq778" podStartSLOduration=1.012735561 podStartE2EDuration="14.79691875s" podCreationTimestamp="2026-01-23 14:17:37 +0000 UTC" firstStartedPulling="2026-01-23 14:17:37.611194821 +0000 UTC m=+804.606023561" lastFinishedPulling="2026-01-23 14:17:51.39537797 +0000 UTC m=+818.390206750" observedRunningTime="2026-01-23 14:17:51.792829442 +0000 UTC m=+818.787658252" watchObservedRunningTime="2026-01-23 14:17:51.79691875 +0000 UTC m=+818.791747520" Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.821728 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.829722 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lflht"] Jan 23 14:17:51 crc kubenswrapper[4775]: I0123 14:17:51.831772 4775 scope.go:117] "RemoveContainer" containerID="f97a99a72e6da74778e3548426a45903a3d520396f1383be0c6443f902f8596a" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.807537 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-p7nxk"] Jan 23 14:17:52 crc kubenswrapper[4775]: E0123 14:17:52.807784 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="extract-content" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.807851 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="extract-content" Jan 23 14:17:52 crc kubenswrapper[4775]: E0123 14:17:52.807867 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="extract-utilities" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.807878 4775 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="extract-utilities" Jan 23 14:17:52 crc kubenswrapper[4775]: E0123 14:17:52.807893 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="registry-server" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.807901 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="registry-server" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.808043 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" containerName="registry-server" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.808690 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.811410 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-nrpjz" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.829293 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff"] Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.832188 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.838752 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.844017 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-p7nxk"] Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.866074 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-wmglj"] Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.866880 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.881478 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff"] Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925322 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925379 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctk4g\" (UniqueName: \"kubernetes.io/projected/6932e29c-8eac-4e0f-9516-c2e922655cbc-kube-api-access-ctk4g\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925404 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-ovs-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925492 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xz4m\" (UniqueName: \"kubernetes.io/projected/18100557-00ef-4de8-9a7f-df953190a9c6-kube-api-access-4xz4m\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925544 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxb2k\" (UniqueName: \"kubernetes.io/projected/97726a36-cf4b-4688-b028-448734bd8c23-kube-api-access-qxb2k\") pod \"nmstate-metrics-54757c584b-p7nxk\" (UID: \"97726a36-cf4b-4688-b028-448734bd8c23\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925571 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-nmstate-lock\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.925598 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-dbus-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.944828 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs"] Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.945559 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.948038 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-srpxs" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.948302 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.950104 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 23 14:17:52 crc kubenswrapper[4775]: I0123 14:17:52.960869 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs"] Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026466 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xz4m\" (UniqueName: \"kubernetes.io/projected/18100557-00ef-4de8-9a7f-df953190a9c6-kube-api-access-4xz4m\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026521 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e932364d-5f85-43fd-ba05-f4e0934482c2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026542 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxb2k\" (UniqueName: \"kubernetes.io/projected/97726a36-cf4b-4688-b028-448734bd8c23-kube-api-access-qxb2k\") pod \"nmstate-metrics-54757c584b-p7nxk\" (UID: \"97726a36-cf4b-4688-b028-448734bd8c23\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026559 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-nmstate-lock\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026576 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-dbus-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026602 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbfnj\" (UniqueName: \"kubernetes.io/projected/e932364d-5f85-43fd-ba05-f4e0934482c2-kube-api-access-wbfnj\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026618 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e932364d-5f85-43fd-ba05-f4e0934482c2-nginx-conf\") pod 
\"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026635 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026680 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctk4g\" (UniqueName: \"kubernetes.io/projected/6932e29c-8eac-4e0f-9516-c2e922655cbc-kube-api-access-ctk4g\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026699 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-ovs-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026722 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-nmstate-lock\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026761 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-ovs-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: E0123 14:17:53.026940 4775 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.026982 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/18100557-00ef-4de8-9a7f-df953190a9c6-dbus-socket\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: E0123 14:17:53.027026 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair podName:6932e29c-8eac-4e0f-9516-c2e922655cbc nodeName:}" failed. No retries permitted until 2026-01-23 14:17:53.526991664 +0000 UTC m=+820.521820444 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-rnbff" (UID: "6932e29c-8eac-4e0f-9516-c2e922655cbc") : secret "openshift-nmstate-webhook" not found Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.050331 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctk4g\" (UniqueName: \"kubernetes.io/projected/6932e29c-8eac-4e0f-9516-c2e922655cbc-kube-api-access-ctk4g\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.059279 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xz4m\" (UniqueName: \"kubernetes.io/projected/18100557-00ef-4de8-9a7f-df953190a9c6-kube-api-access-4xz4m\") pod \"nmstate-handler-wmglj\" (UID: \"18100557-00ef-4de8-9a7f-df953190a9c6\") " pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.064499 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxb2k\" (UniqueName: \"kubernetes.io/projected/97726a36-cf4b-4688-b028-448734bd8c23-kube-api-access-qxb2k\") pod \"nmstate-metrics-54757c584b-p7nxk\" (UID: \"97726a36-cf4b-4688-b028-448734bd8c23\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.127861 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e932364d-5f85-43fd-ba05-f4e0934482c2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.127947 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbfnj\" (UniqueName: \"kubernetes.io/projected/e932364d-5f85-43fd-ba05-f4e0934482c2-kube-api-access-wbfnj\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.127994 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e932364d-5f85-43fd-ba05-f4e0934482c2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.128900 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e932364d-5f85-43fd-ba05-f4e0934482c2-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.131288 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7c865d7849-c7sv9"] Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.131958 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.134204 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.135795 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e932364d-5f85-43fd-ba05-f4e0934482c2-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.156134 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbfnj\" (UniqueName: \"kubernetes.io/projected/e932364d-5f85-43fd-ba05-f4e0934482c2-kube-api-access-wbfnj\") pod \"nmstate-console-plugin-7754f76f8b-w5xfs\" (UID: \"e932364d-5f85-43fd-ba05-f4e0934482c2\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.159038 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c865d7849-c7sv9"] Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.182135 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229364 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7mj\" (UniqueName: \"kubernetes.io/projected/efd2a7f1-33df-47f9-8482-153d9e0beeb8-kube-api-access-pq7mj\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229411 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-oauth-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229428 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-service-ca\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229462 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-trusted-ca-bundle\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229573 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 
crc kubenswrapper[4775]: I0123 14:17:53.229615 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-oauth-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.229653 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.261417 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331253 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq7mj\" (UniqueName: \"kubernetes.io/projected/efd2a7f1-33df-47f9-8482-153d9e0beeb8-kube-api-access-pq7mj\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-oauth-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331616 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-service-ca\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331651 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-trusted-ca-bundle\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331712 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331739 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-oauth-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.331758 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.332618 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.334998 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-service-ca\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.335114 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-trusted-ca-bundle\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.335539 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/efd2a7f1-33df-47f9-8482-153d9e0beeb8-oauth-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.338160 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-oauth-config\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.338381 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/efd2a7f1-33df-47f9-8482-153d9e0beeb8-console-serving-cert\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.352457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq7mj\" (UniqueName: \"kubernetes.io/projected/efd2a7f1-33df-47f9-8482-153d9e0beeb8-kube-api-access-pq7mj\") pod \"console-7c865d7849-c7sv9\" (UID: \"efd2a7f1-33df-47f9-8482-153d9e0beeb8\") " pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.394905 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-p7nxk"] Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.455788 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs"] Jan 23 14:17:53 crc kubenswrapper[4775]: W0123 14:17:53.463229 4775 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode932364d_5f85_43fd_ba05_f4e0934482c2.slice/crio-4dd5724bd923305009f412177d40b11433b669334335b9f9c2645a617f67b5fb WatchSource:0}: Error finding container 4dd5724bd923305009f412177d40b11433b669334335b9f9c2645a617f67b5fb: Status 404 returned error can't find the container with id 4dd5724bd923305009f412177d40b11433b669334335b9f9c2645a617f67b5fb Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.477826 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.535143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.539622 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6932e29c-8eac-4e0f-9516-c2e922655cbc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-rnbff\" (UID: \"6932e29c-8eac-4e0f-9516-c2e922655cbc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.646134 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7c865d7849-c7sv9"] Jan 23 14:17:53 crc kubenswrapper[4775]: W0123 14:17:53.652845 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefd2a7f1_33df_47f9_8482_153d9e0beeb8.slice/crio-1bf41652fc016f5e54c20860648b543ebeeb96e18b1ecd05adbcb88934b56fd8 WatchSource:0}: Error finding container 1bf41652fc016f5e54c20860648b543ebeeb96e18b1ecd05adbcb88934b56fd8: Status 404 returned error can't find the container with id 1bf41652fc016f5e54c20860648b543ebeeb96e18b1ecd05adbcb88934b56fd8 Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.727317 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="748a9ff6-4b80-40f9-ae41-37bc66c272f6" path="/var/lib/kubelet/pods/748a9ff6-4b80-40f9-ae41-37bc66c272f6/volumes" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.748328 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.779716 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wmglj" event={"ID":"18100557-00ef-4de8-9a7f-df953190a9c6","Type":"ContainerStarted","Data":"b3d3c1c6fbdcb239d8fe5a45d103295d6b5151975c3b3e109348e94e483dc186"} Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.781242 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" event={"ID":"97726a36-cf4b-4688-b028-448734bd8c23","Type":"ContainerStarted","Data":"5154885423dddf6d8bc166f5cb8ad2830da14db2cf4b67f8388d24561954b5d1"} Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.782822 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" event={"ID":"e932364d-5f85-43fd-ba05-f4e0934482c2","Type":"ContainerStarted","Data":"4dd5724bd923305009f412177d40b11433b669334335b9f9c2645a617f67b5fb"} Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.785401 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c865d7849-c7sv9" event={"ID":"efd2a7f1-33df-47f9-8482-153d9e0beeb8","Type":"ContainerStarted","Data":"1bf41652fc016f5e54c20860648b543ebeeb96e18b1ecd05adbcb88934b56fd8"} Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.807638 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7c865d7849-c7sv9" podStartSLOduration=0.807613653 podStartE2EDuration="807.613653ms" podCreationTimestamp="2026-01-23 14:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:17:53.801160466 +0000 UTC m=+820.795989216" watchObservedRunningTime="2026-01-23 14:17:53.807613653 +0000 UTC m=+820.802442393" Jan 23 14:17:53 crc kubenswrapper[4775]: I0123 14:17:53.932663 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff"] Jan 23 14:17:53 crc kubenswrapper[4775]: W0123 14:17:53.937783 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6932e29c_8eac_4e0f_9516_c2e922655cbc.slice/crio-a986bac49e693d9e72e9c8dbf7b7c599c4e39577058b90302709cc4643a5f372 WatchSource:0}: Error finding container a986bac49e693d9e72e9c8dbf7b7c599c4e39577058b90302709cc4643a5f372: Status 404 returned error can't find the container with id a986bac49e693d9e72e9c8dbf7b7c599c4e39577058b90302709cc4643a5f372 Jan 23 14:17:54 crc kubenswrapper[4775]: I0123 14:17:54.795753 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7c865d7849-c7sv9" event={"ID":"efd2a7f1-33df-47f9-8482-153d9e0beeb8","Type":"ContainerStarted","Data":"7a000b614dd77b084e202a456815f5889d89b5a3747f0f1e7dcec6cb0a9cbac0"} Jan 23 14:17:54 crc kubenswrapper[4775]: I0123 14:17:54.797304 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" event={"ID":"6932e29c-8eac-4e0f-9516-c2e922655cbc","Type":"ContainerStarted","Data":"a986bac49e693d9e72e9c8dbf7b7c599c4e39577058b90302709cc4643a5f372"} Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.810949 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-wmglj" 
event={"ID":"18100557-00ef-4de8-9a7f-df953190a9c6","Type":"ContainerStarted","Data":"a7e9f3aa7ecded5d64bc1f143b905ca8ff72a0d3acd447775cf5da2d439fdb10"} Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.811611 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.815445 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" event={"ID":"97726a36-cf4b-4688-b028-448734bd8c23","Type":"ContainerStarted","Data":"cee3a8c344075934c867c0d811a1a781233b47794d730d65cb6e22db1f428fd1"} Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.817610 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" event={"ID":"e932364d-5f85-43fd-ba05-f4e0934482c2","Type":"ContainerStarted","Data":"d45ba1f257acb413ce18702b33b023f421ebdf37d701613aaebe894fece57856"} Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.820054 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" event={"ID":"6932e29c-8eac-4e0f-9516-c2e922655cbc","Type":"ContainerStarted","Data":"3dd0e39889d715e1a5db8f67a6044becfbd69078c63fdecb3b7683b832ef2076"} Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.820259 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.831204 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-wmglj" podStartSLOduration=1.976204542 podStartE2EDuration="4.831186s" podCreationTimestamp="2026-01-23 14:17:52 +0000 UTC" firstStartedPulling="2026-01-23 14:17:53.200796853 +0000 UTC m=+820.195625593" lastFinishedPulling="2026-01-23 14:17:56.055778311 +0000 UTC m=+823.050607051" observedRunningTime="2026-01-23 14:17:56.830434788 +0000 UTC m=+823.825263538" watchObservedRunningTime="2026-01-23 14:17:56.831186 +0000 UTC m=+823.826014750" Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.850254 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-w5xfs" podStartSLOduration=2.266333718 podStartE2EDuration="4.850233131s" podCreationTimestamp="2026-01-23 14:17:52 +0000 UTC" firstStartedPulling="2026-01-23 14:17:53.465372029 +0000 UTC m=+820.460200769" lastFinishedPulling="2026-01-23 14:17:56.049271442 +0000 UTC m=+823.044100182" observedRunningTime="2026-01-23 14:17:56.847635366 +0000 UTC m=+823.842464116" watchObservedRunningTime="2026-01-23 14:17:56.850233131 +0000 UTC m=+823.845061871" Jan 23 14:17:56 crc kubenswrapper[4775]: I0123 14:17:56.871470 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" podStartSLOduration=2.753711531 podStartE2EDuration="4.871453585s" podCreationTimestamp="2026-01-23 14:17:52 +0000 UTC" firstStartedPulling="2026-01-23 14:17:53.940284182 +0000 UTC m=+820.935112922" lastFinishedPulling="2026-01-23 14:17:56.058026236 +0000 UTC m=+823.052854976" observedRunningTime="2026-01-23 14:17:56.868462729 +0000 UTC m=+823.863291489" watchObservedRunningTime="2026-01-23 14:17:56.871453585 +0000 UTC m=+823.866282325" Jan 23 14:17:58 crc kubenswrapper[4775]: I0123 14:17:58.834583 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" event={"ID":"97726a36-cf4b-4688-b028-448734bd8c23","Type":"ContainerStarted","Data":"09e62661f220d80a4fa2df22aaf27835af3369938efb25fd34802463aa546832"} Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.222339 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-wmglj" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.247770 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-p7nxk" podStartSLOduration=6.30682747 podStartE2EDuration="11.247746882s" podCreationTimestamp="2026-01-23 14:17:52 +0000 UTC" firstStartedPulling="2026-01-23 14:17:53.403065006 +0000 UTC m=+820.397893756" lastFinishedPulling="2026-01-23 14:17:58.343984428 +0000 UTC m=+825.338813168" observedRunningTime="2026-01-23 14:17:58.854522102 +0000 UTC m=+825.849350872" watchObservedRunningTime="2026-01-23 14:18:03.247746882 +0000 UTC m=+830.242575662" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.478309 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.478388 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.485890 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.872339 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7c865d7849-c7sv9" Jan 23 14:18:03 crc kubenswrapper[4775]: I0123 14:18:03.999333 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:18:09 crc kubenswrapper[4775]: I0123 14:18:09.553706 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vdg25" Jan 23 14:18:13 crc kubenswrapper[4775]: I0123 14:18:13.758370 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-rnbff" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.352460 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f"] Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.356311 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.360989 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.365905 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f"] Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.448288 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb88k\" (UniqueName: \"kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.448356 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.448413 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.549347 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb88k\" (UniqueName: \"kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.549675 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.549730 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.550751 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.550843 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.586011 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb88k\" (UniqueName: \"kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.713742 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:28 crc kubenswrapper[4775]: I0123 14:18:28.993924 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f"] Jan 23 14:18:29 crc kubenswrapper[4775]: W0123 14:18:29.001340 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f15de03_78a8_4158_8a06_0174d617e32b.slice/crio-358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7 WatchSource:0}: Error finding container 358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7: Status 404 returned error can't find the container with id 358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7 Jan 23 14:18:29 crc kubenswrapper[4775]: I0123 14:18:29.026389 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" event={"ID":"6f15de03-78a8-4158-8a06-0174d617e32b","Type":"ContainerStarted","Data":"358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7"} Jan 23 14:18:29 crc kubenswrapper[4775]: I0123 14:18:29.055369 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-fgb82" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" containerID="cri-o://f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442" gracePeriod=15 Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.012649 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fgb82_a6821f92-2d15-4dc0-92ed-7a30cef98db9/console/0.log" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.013192 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.032832 4775 generic.go:334] "Generic (PLEG): container finished" podID="6f15de03-78a8-4158-8a06-0174d617e32b" containerID="0a352d91c01c1461c69f722c13f874d98c91d123737945ce6c53a0c87e019e94" exitCode=0 Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.032899 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" event={"ID":"6f15de03-78a8-4158-8a06-0174d617e32b","Type":"ContainerDied","Data":"0a352d91c01c1461c69f722c13f874d98c91d123737945ce6c53a0c87e019e94"} Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.034949 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-fgb82_a6821f92-2d15-4dc0-92ed-7a30cef98db9/console/0.log" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.035054 4775 generic.go:334] "Generic (PLEG): container finished" podID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerID="f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442" exitCode=2 Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.035110 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgb82" event={"ID":"a6821f92-2d15-4dc0-92ed-7a30cef98db9","Type":"ContainerDied","Data":"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442"} Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.035158 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-fgb82" event={"ID":"a6821f92-2d15-4dc0-92ed-7a30cef98db9","Type":"ContainerDied","Data":"ef54fd5e26cacb272f1e1be9cfe28c0c931df15d597bb7da81a47734c646362b"} Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.035129 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-fgb82" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.035215 4775 scope.go:117] "RemoveContainer" containerID="f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.073653 4775 scope.go:117] "RemoveContainer" containerID="f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442" Jan 23 14:18:30 crc kubenswrapper[4775]: E0123 14:18:30.075062 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442\": container with ID starting with f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442 not found: ID does not exist" containerID="f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.075103 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442"} err="failed to get container status \"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442\": rpc error: code = NotFound desc = could not find container \"f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442\": container with ID starting with f4aaa0765a07f4839c71e2b2a303a3c0c625cc8d1414133eff523c9a0838b442 not found: ID does not exist" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212425 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgvmt\" (UniqueName: \"kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212563 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212622 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212761 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212899 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.212964 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert\") pod 
\"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.213050 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle\") pod \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\" (UID: \"a6821f92-2d15-4dc0-92ed-7a30cef98db9\") " Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.213641 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca" (OuterVolumeSpecName: "service-ca") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.213732 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config" (OuterVolumeSpecName: "console-config") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.214378 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.214411 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.222661 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.223052 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt" (OuterVolumeSpecName: "kube-api-access-tgvmt") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "kube-api-access-tgvmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.227504 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a6821f92-2d15-4dc0-92ed-7a30cef98db9" (UID: "a6821f92-2d15-4dc0-92ed-7a30cef98db9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314517 4775 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314573 4775 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314594 4775 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314611 4775 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314628 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgvmt\" (UniqueName: \"kubernetes.io/projected/a6821f92-2d15-4dc0-92ed-7a30cef98db9-kube-api-access-tgvmt\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314647 4775 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.314662 4775 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a6821f92-2d15-4dc0-92ed-7a30cef98db9-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.382792 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:18:30 crc kubenswrapper[4775]: I0123 14:18:30.389563 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-fgb82"] Jan 23 14:18:31 crc kubenswrapper[4775]: I0123 14:18:31.725632 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" path="/var/lib/kubelet/pods/a6821f92-2d15-4dc0-92ed-7a30cef98db9/volumes" Jan 23 14:18:32 crc kubenswrapper[4775]: I0123 14:18:32.053714 4775 generic.go:334] "Generic (PLEG): container finished" podID="6f15de03-78a8-4158-8a06-0174d617e32b" containerID="41c36445db7844bf1524cc6bf76ea62e882c8f64c3b04b1bf6092f57c54b3805" exitCode=0 Jan 23 14:18:32 crc kubenswrapper[4775]: I0123 14:18:32.053763 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" event={"ID":"6f15de03-78a8-4158-8a06-0174d617e32b","Type":"ContainerDied","Data":"41c36445db7844bf1524cc6bf76ea62e882c8f64c3b04b1bf6092f57c54b3805"} Jan 23 14:18:33 crc kubenswrapper[4775]: I0123 14:18:33.066157 4775 generic.go:334] "Generic (PLEG): container finished" podID="6f15de03-78a8-4158-8a06-0174d617e32b" containerID="ce3c959548e46b225f00e83f12330d256bd0f985a80a48e96d43c7bc6cc4968a" exitCode=0 Jan 23 14:18:33 crc kubenswrapper[4775]: I0123 14:18:33.066260 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" event={"ID":"6f15de03-78a8-4158-8a06-0174d617e32b","Type":"ContainerDied","Data":"ce3c959548e46b225f00e83f12330d256bd0f985a80a48e96d43c7bc6cc4968a"} Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.348314 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.470250 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb88k\" (UniqueName: \"kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k\") pod \"6f15de03-78a8-4158-8a06-0174d617e32b\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.470316 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util\") pod \"6f15de03-78a8-4158-8a06-0174d617e32b\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.470349 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle\") pod \"6f15de03-78a8-4158-8a06-0174d617e32b\" (UID: \"6f15de03-78a8-4158-8a06-0174d617e32b\") " Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.471226 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle" (OuterVolumeSpecName: "bundle") pod "6f15de03-78a8-4158-8a06-0174d617e32b" (UID: "6f15de03-78a8-4158-8a06-0174d617e32b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.478127 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k" (OuterVolumeSpecName: "kube-api-access-vb88k") pod "6f15de03-78a8-4158-8a06-0174d617e32b" (UID: "6f15de03-78a8-4158-8a06-0174d617e32b"). InnerVolumeSpecName "kube-api-access-vb88k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.497141 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util" (OuterVolumeSpecName: "util") pod "6f15de03-78a8-4158-8a06-0174d617e32b" (UID: "6f15de03-78a8-4158-8a06-0174d617e32b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.571326 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb88k\" (UniqueName: \"kubernetes.io/projected/6f15de03-78a8-4158-8a06-0174d617e32b-kube-api-access-vb88k\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.571358 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-util\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:34 crc kubenswrapper[4775]: I0123 14:18:34.571367 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6f15de03-78a8-4158-8a06-0174d617e32b-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:18:35 crc kubenswrapper[4775]: I0123 14:18:35.082962 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" event={"ID":"6f15de03-78a8-4158-8a06-0174d617e32b","Type":"ContainerDied","Data":"358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7"} Jan 23 14:18:35 crc kubenswrapper[4775]: I0123 14:18:35.083023 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="358e67ba750c945eb1905172d8ca362a184f647280970764596331751b3f85e7" Jan 23 14:18:35 crc kubenswrapper[4775]: I0123 14:18:35.083130 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.424243 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57"] Jan 23 14:18:43 crc kubenswrapper[4775]: E0123 14:18:43.425074 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="pull" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425090 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="pull" Jan 23 14:18:43 crc kubenswrapper[4775]: E0123 14:18:43.425106 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="extract" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425114 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="extract" Jan 23 14:18:43 crc kubenswrapper[4775]: E0123 14:18:43.425129 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="util" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425137 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="util" Jan 23 14:18:43 crc kubenswrapper[4775]: E0123 14:18:43.425150 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425158 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425264 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6821f92-2d15-4dc0-92ed-7a30cef98db9" containerName="console" Jan 23 
14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425279 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f15de03-78a8-4158-8a06-0174d617e32b" containerName="extract" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.425693 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.428747 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.428875 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.428931 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.429790 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bfk62" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.430368 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.454553 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57"] Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.583961 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-apiservice-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.584028 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-webhook-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.584050 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbz8\" (UniqueName: \"kubernetes.io/projected/838b952f-6d05-4955-82fd-9cf8a017c5b5-kube-api-access-tvbz8\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.660793 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz"] Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.661397 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.662674 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-dj9rr" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.664088 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.666703 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.678544 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz"] Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.685203 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-apiservice-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.685255 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-webhook-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.685279 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvbz8\" (UniqueName: \"kubernetes.io/projected/838b952f-6d05-4955-82fd-9cf8a017c5b5-kube-api-access-tvbz8\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.690494 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-webhook-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.690973 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/838b952f-6d05-4955-82fd-9cf8a017c5b5-apiservice-cert\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.704585 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvbz8\" (UniqueName: \"kubernetes.io/projected/838b952f-6d05-4955-82fd-9cf8a017c5b5-kube-api-access-tvbz8\") pod \"metallb-operator-controller-manager-558d9b5f8-fgs57\" (UID: \"838b952f-6d05-4955-82fd-9cf8a017c5b5\") " pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.743263 4775 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.786545 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-apiservice-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.786859 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-webhook-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.786878 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wj4h\" (UniqueName: \"kubernetes.io/projected/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-kube-api-access-6wj4h\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.891507 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-webhook-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.891753 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wj4h\" (UniqueName: \"kubernetes.io/projected/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-kube-api-access-6wj4h\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.891913 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-apiservice-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.900423 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-webhook-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.908105 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-apiservice-cert\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: 
\"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.911531 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wj4h\" (UniqueName: \"kubernetes.io/projected/fa6cceac-c1d4-4e7c-9e60-4dd698abc182-kube-api-access-6wj4h\") pod \"metallb-operator-webhook-server-699f5544f9-66nkz\" (UID: \"fa6cceac-c1d4-4e7c-9e60-4dd698abc182\") " pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:43 crc kubenswrapper[4775]: I0123 14:18:43.973400 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:44 crc kubenswrapper[4775]: I0123 14:18:44.150943 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz"] Jan 23 14:18:44 crc kubenswrapper[4775]: I0123 14:18:44.213261 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57"] Jan 23 14:18:45 crc kubenswrapper[4775]: I0123 14:18:45.142468 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" event={"ID":"838b952f-6d05-4955-82fd-9cf8a017c5b5","Type":"ContainerStarted","Data":"31d26ecf0593c7bd8ee008f0c003dd240a6c9e29631f2280aeaa76f92e9519eb"} Jan 23 14:18:45 crc kubenswrapper[4775]: I0123 14:18:45.144010 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" event={"ID":"fa6cceac-c1d4-4e7c-9e60-4dd698abc182","Type":"ContainerStarted","Data":"8e61c6aed48d6e9729931df5cccc8e2c99bac20ba2cc4ed55f76afdd1451bc55"} Jan 23 14:18:48 crc kubenswrapper[4775]: I0123 14:18:48.162520 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" event={"ID":"838b952f-6d05-4955-82fd-9cf8a017c5b5","Type":"ContainerStarted","Data":"b1e1368cee8aa55ec36a56c7081530eab3e8dc7106c24939b70dfd6fc64fdf88"} Jan 23 14:18:48 crc kubenswrapper[4775]: I0123 14:18:48.163232 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:18:48 crc kubenswrapper[4775]: I0123 14:18:48.187776 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" podStartSLOduration=1.928141788 podStartE2EDuration="5.187754879s" podCreationTimestamp="2026-01-23 14:18:43 +0000 UTC" firstStartedPulling="2026-01-23 14:18:44.220649398 +0000 UTC m=+871.215478148" lastFinishedPulling="2026-01-23 14:18:47.480262499 +0000 UTC m=+874.475091239" observedRunningTime="2026-01-23 14:18:48.181916818 +0000 UTC m=+875.176745598" watchObservedRunningTime="2026-01-23 14:18:48.187754879 +0000 UTC m=+875.182583639" Jan 23 14:18:50 crc kubenswrapper[4775]: I0123 14:18:50.175887 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" event={"ID":"fa6cceac-c1d4-4e7c-9e60-4dd698abc182","Type":"ContainerStarted","Data":"1f2020d5f4443d4280788cd115936a0c4526ce925c109f4db0f17392eeff8c07"} Jan 23 14:18:50 crc kubenswrapper[4775]: I0123 14:18:50.177952 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:18:50 crc kubenswrapper[4775]: I0123 14:18:50.207765 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" podStartSLOduration=2.039282314 podStartE2EDuration="7.207734757s" podCreationTimestamp="2026-01-23 14:18:43 +0000 UTC" firstStartedPulling="2026-01-23 14:18:44.164241855 +0000 UTC m=+871.159070595" lastFinishedPulling="2026-01-23 14:18:49.332694288 +0000 UTC m=+876.327523038" observedRunningTime="2026-01-23 14:18:50.204338388 +0000 UTC m=+877.199167168" watchObservedRunningTime="2026-01-23 14:18:50.207734757 +0000 UTC m=+877.202563537" Jan 23 14:18:53 crc kubenswrapper[4775]: I0123 14:18:53.219073 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:18:53 crc kubenswrapper[4775]: I0123 14:18:53.219178 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:19:03 crc kubenswrapper[4775]: I0123 14:19:03.980054 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-699f5544f9-66nkz" Jan 23 14:19:23 crc kubenswrapper[4775]: I0123 14:19:23.219006 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:19:23 crc kubenswrapper[4775]: I0123 14:19:23.221507 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:19:23 crc kubenswrapper[4775]: I0123 14:19:23.747912 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-558d9b5f8-fgs57" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.571900 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.572697 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.575145 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.580242 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-wxcj6" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.582664 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pv6fp"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.585426 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.590905 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.592174 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.595658 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.658971 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq5gz\" (UniqueName: \"kubernetes.io/projected/6831fcdc-628b-4bef-bf9c-5e24b63f9196-kube-api-access-sq5gz\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659015 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-reloader\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659036 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics-certs\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659149 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-conf\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659228 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-startup\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659272 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 
14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659361 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hglwq\" (UniqueName: \"kubernetes.io/projected/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-kube-api-access-hglwq\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659389 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-sockets\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.659412 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.685470 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-x4gxj"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.686283 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.689727 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.690673 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.690743 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-bdfk9" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.696152 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.716229 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-7qz58"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.717119 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.718354 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.743507 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7qz58"] Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.760857 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-cert\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.760926 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hglwq\" (UniqueName: \"kubernetes.io/projected/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-kube-api-access-hglwq\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.760960 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-sockets\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.760980 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761002 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761023 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq5gz\" (UniqueName: \"kubernetes.io/projected/6831fcdc-628b-4bef-bf9c-5e24b63f9196-kube-api-access-sq5gz\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761045 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-reloader\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761059 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metallb-excludel2\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761085 4775 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics-certs\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761115 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761131 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh2jd\" (UniqueName: \"kubernetes.io/projected/7755c0c4-4e11-47c6-955d-453408fd4316-kube-api-access-jh2jd\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761147 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-conf\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761169 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnnds\" (UniqueName: \"kubernetes.io/projected/9334cd3c-2410-4fbd-8cc1-14edca3afb92-kube-api-access-mnnds\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761186 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761213 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-startup\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761238 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.761694 4775 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.761746 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert podName:9eb8e4c8-06ce-427a-9b91-7b77d4e8a783 nodeName:}" failed. No retries permitted until 2026-01-23 14:19:25.261730386 +0000 UTC m=+912.256559126 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert") pod "frr-k8s-webhook-server-7df86c4f6c-p49hv" (UID: "9eb8e4c8-06ce-427a-9b91-7b77d4e8a783") : secret "frr-k8s-webhook-server-cert" not found Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761787 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-sockets\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.761873 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.762078 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-conf\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.765471 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6831fcdc-628b-4bef-bf9c-5e24b63f9196-reloader\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.766567 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6831fcdc-628b-4bef-bf9c-5e24b63f9196-frr-startup\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.772365 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6831fcdc-628b-4bef-bf9c-5e24b63f9196-metrics-certs\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.785284 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hglwq\" (UniqueName: \"kubernetes.io/projected/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-kube-api-access-hglwq\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.786233 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq5gz\" (UniqueName: \"kubernetes.io/projected/6831fcdc-628b-4bef-bf9c-5e24b63f9196-kube-api-access-sq5gz\") pod \"frr-k8s-pv6fp\" (UID: \"6831fcdc-628b-4bef-bf9c-5e24b63f9196\") " pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.862918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-cert\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: 
I0123 14:19:24.863036 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863062 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metallb-excludel2\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863086 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863104 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh2jd\" (UniqueName: \"kubernetes.io/projected/7755c0c4-4e11-47c6-955d-453408fd4316-kube-api-access-jh2jd\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863127 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnnds\" (UniqueName: \"kubernetes.io/projected/9334cd3c-2410-4fbd-8cc1-14edca3afb92-kube-api-access-mnnds\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863142 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863261 4775 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863313 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs podName:9334cd3c-2410-4fbd-8cc1-14edca3afb92 nodeName:}" failed. No retries permitted until 2026-01-23 14:19:25.363297453 +0000 UTC m=+912.358126193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs") pod "speaker-x4gxj" (UID: "9334cd3c-2410-4fbd-8cc1-14edca3afb92") : secret "speaker-certs-secret" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863475 4775 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863539 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs podName:7755c0c4-4e11-47c6-955d-453408fd4316 nodeName:}" failed. 
No retries permitted until 2026-01-23 14:19:25.363522309 +0000 UTC m=+912.358351049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs") pod "controller-6968d8fdc4-7qz58" (UID: "7755c0c4-4e11-47c6-955d-453408fd4316") : secret "controller-certs-secret" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863609 4775 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 14:19:24 crc kubenswrapper[4775]: E0123 14:19:24.863651 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist podName:9334cd3c-2410-4fbd-8cc1-14edca3afb92 nodeName:}" failed. No retries permitted until 2026-01-23 14:19:25.363640993 +0000 UTC m=+912.358469833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist") pod "speaker-x4gxj" (UID: "9334cd3c-2410-4fbd-8cc1-14edca3afb92") : secret "metallb-memberlist" not found Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.863973 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metallb-excludel2\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.864149 4775 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.881260 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-cert\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.883931 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnnds\" (UniqueName: \"kubernetes.io/projected/9334cd3c-2410-4fbd-8cc1-14edca3afb92-kube-api-access-mnnds\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.886627 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh2jd\" (UniqueName: \"kubernetes.io/projected/7755c0c4-4e11-47c6-955d-453408fd4316-kube-api-access-jh2jd\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:24 crc kubenswrapper[4775]: I0123 14:19:24.900923 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.268739 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.273989 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9eb8e4c8-06ce-427a-9b91-7b77d4e8a783-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-p49hv\" (UID: \"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.370496 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.370569 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.370611 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:25 crc kubenswrapper[4775]: E0123 14:19:25.371056 4775 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 23 14:19:25 crc kubenswrapper[4775]: E0123 14:19:25.371223 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist podName:9334cd3c-2410-4fbd-8cc1-14edca3afb92 nodeName:}" failed. No retries permitted until 2026-01-23 14:19:26.37119872 +0000 UTC m=+913.366027480 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist") pod "speaker-x4gxj" (UID: "9334cd3c-2410-4fbd-8cc1-14edca3afb92") : secret "metallb-memberlist" not found Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.373837 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-metrics-certs\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.377258 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7755c0c4-4e11-47c6-955d-453408fd4316-metrics-certs\") pod \"controller-6968d8fdc4-7qz58\" (UID: \"7755c0c4-4e11-47c6-955d-453408fd4316\") " pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.419172 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"db1a01bc1ba1ee42d7e50bc0d9c3a1c450dee6ca84d4fbebd85aad6d42b30298"} Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.488677 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.629445 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.787891 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv"] Jan 23 14:19:25 crc kubenswrapper[4775]: W0123 14:19:25.807188 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eb8e4c8_06ce_427a_9b91_7b77d4e8a783.slice/crio-43b2dd981de197f98bd2f99512806f493f10245ed8b65dd0901972e979d84574 WatchSource:0}: Error finding container 43b2dd981de197f98bd2f99512806f493f10245ed8b65dd0901972e979d84574: Status 404 returned error can't find the container with id 43b2dd981de197f98bd2f99512806f493f10245ed8b65dd0901972e979d84574 Jan 23 14:19:25 crc kubenswrapper[4775]: I0123 14:19:25.871779 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-7qz58"] Jan 23 14:19:25 crc kubenswrapper[4775]: W0123 14:19:25.877093 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7755c0c4_4e11_47c6_955d_453408fd4316.slice/crio-17fd6d08c3dd7a38078b74180c299c580896aef821aca33ac211e3a8b3b3f794 WatchSource:0}: Error finding container 17fd6d08c3dd7a38078b74180c299c580896aef821aca33ac211e3a8b3b3f794: Status 404 returned error can't find the container with id 17fd6d08c3dd7a38078b74180c299c580896aef821aca33ac211e3a8b3b3f794 Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.388466 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.397036 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9334cd3c-2410-4fbd-8cc1-14edca3afb92-memberlist\") pod \"speaker-x4gxj\" (UID: \"9334cd3c-2410-4fbd-8cc1-14edca3afb92\") " pod="metallb-system/speaker-x4gxj" Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.430584 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7qz58" event={"ID":"7755c0c4-4e11-47c6-955d-453408fd4316","Type":"ContainerStarted","Data":"57f2c829861f1d8e95295d15874ad6927f022e9f4978d657a631c20112805825"} Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.430923 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.431054 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7qz58" event={"ID":"7755c0c4-4e11-47c6-955d-453408fd4316","Type":"ContainerStarted","Data":"5ba1d06968107c6c4878cb34ef755a33804579ee7891a376f65a795c0ac3484b"} Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.431187 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-7qz58" event={"ID":"7755c0c4-4e11-47c6-955d-453408fd4316","Type":"ContainerStarted","Data":"17fd6d08c3dd7a38078b74180c299c580896aef821aca33ac211e3a8b3b3f794"} Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.432442 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" event={"ID":"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783","Type":"ContainerStarted","Data":"43b2dd981de197f98bd2f99512806f493f10245ed8b65dd0901972e979d84574"} Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.462004 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-7qz58" podStartSLOduration=2.461977933 podStartE2EDuration="2.461977933s" podCreationTimestamp="2026-01-23 14:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:19:26.460000785 +0000 UTC m=+913.454829595" watchObservedRunningTime="2026-01-23 14:19:26.461977933 +0000 UTC m=+913.456806713" Jan 23 14:19:26 crc kubenswrapper[4775]: I0123 14:19:26.498511 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-x4gxj" Jan 23 14:19:26 crc kubenswrapper[4775]: W0123 14:19:26.525110 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9334cd3c_2410_4fbd_8cc1_14edca3afb92.slice/crio-ec1d4be9b1f4f98a826241a8b746c5b4889316b6be8ae3f647b850bda393b57e WatchSource:0}: Error finding container ec1d4be9b1f4f98a826241a8b746c5b4889316b6be8ae3f647b850bda393b57e: Status 404 returned error can't find the container with id ec1d4be9b1f4f98a826241a8b746c5b4889316b6be8ae3f647b850bda393b57e Jan 23 14:19:27 crc kubenswrapper[4775]: I0123 14:19:27.448791 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x4gxj" event={"ID":"9334cd3c-2410-4fbd-8cc1-14edca3afb92","Type":"ContainerStarted","Data":"156441b27aaaded54dcafdd02e2c5c5e6b18f47eede2e0d388a99d3496420beb"} Jan 23 14:19:27 crc kubenswrapper[4775]: I0123 14:19:27.449047 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x4gxj" event={"ID":"9334cd3c-2410-4fbd-8cc1-14edca3afb92","Type":"ContainerStarted","Data":"c7ab84ab93277513795d64aa01d22abe32a2e419638db5331534b03973fc7c0b"} Jan 23 14:19:27 crc kubenswrapper[4775]: I0123 14:19:27.449057 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-x4gxj" event={"ID":"9334cd3c-2410-4fbd-8cc1-14edca3afb92","Type":"ContainerStarted","Data":"ec1d4be9b1f4f98a826241a8b746c5b4889316b6be8ae3f647b850bda393b57e"} Jan 23 14:19:27 crc kubenswrapper[4775]: I0123 14:19:27.449519 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-x4gxj" Jan 23 14:19:27 crc kubenswrapper[4775]: I0123 14:19:27.466495 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-x4gxj" podStartSLOduration=3.466472231 podStartE2EDuration="3.466472231s" podCreationTimestamp="2026-01-23 14:19:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:19:27.466004988 +0000 UTC m=+914.460833738" watchObservedRunningTime="2026-01-23 14:19:27.466472231 +0000 UTC m=+914.461300961" Jan 23 14:19:33 crc kubenswrapper[4775]: I0123 14:19:33.498103 4775 generic.go:334] "Generic (PLEG): container finished" podID="6831fcdc-628b-4bef-bf9c-5e24b63f9196" containerID="813a6d56c6670b6d99b6c9f72e927be05faf833d5b670b06bdeeb14e982e2169" exitCode=0 Jan 23 14:19:33 crc kubenswrapper[4775]: I0123 14:19:33.498943 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerDied","Data":"813a6d56c6670b6d99b6c9f72e927be05faf833d5b670b06bdeeb14e982e2169"} Jan 23 14:19:33 crc kubenswrapper[4775]: I0123 14:19:33.503289 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" event={"ID":"9eb8e4c8-06ce-427a-9b91-7b77d4e8a783","Type":"ContainerStarted","Data":"aa9ece7b4d5c7e9fc0f0e34c63e6fd4fe03536eca6e6c15c90afa524847a9383"} Jan 23 14:19:33 crc kubenswrapper[4775]: I0123 14:19:33.504367 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:33 crc kubenswrapper[4775]: I0123 14:19:33.583859 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" podStartSLOduration=2.487079212 
podStartE2EDuration="9.583832015s" podCreationTimestamp="2026-01-23 14:19:24 +0000 UTC" firstStartedPulling="2026-01-23 14:19:25.809739506 +0000 UTC m=+912.804568266" lastFinishedPulling="2026-01-23 14:19:32.906492289 +0000 UTC m=+919.901321069" observedRunningTime="2026-01-23 14:19:33.572549537 +0000 UTC m=+920.567378317" watchObservedRunningTime="2026-01-23 14:19:33.583832015 +0000 UTC m=+920.578660795" Jan 23 14:19:34 crc kubenswrapper[4775]: I0123 14:19:34.514588 4775 generic.go:334] "Generic (PLEG): container finished" podID="6831fcdc-628b-4bef-bf9c-5e24b63f9196" containerID="b200717ccdf0687d46117014ac949cddbb385b39b3e47b3e49905f39327299d3" exitCode=0 Jan 23 14:19:34 crc kubenswrapper[4775]: I0123 14:19:34.514708 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerDied","Data":"b200717ccdf0687d46117014ac949cddbb385b39b3e47b3e49905f39327299d3"} Jan 23 14:19:35 crc kubenswrapper[4775]: I0123 14:19:35.526481 4775 generic.go:334] "Generic (PLEG): container finished" podID="6831fcdc-628b-4bef-bf9c-5e24b63f9196" containerID="8103a0949ce2e5c463436b105a88b8197cdfa3462ff1260aa4072482bb0bdc6b" exitCode=0 Jan 23 14:19:35 crc kubenswrapper[4775]: I0123 14:19:35.526584 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerDied","Data":"8103a0949ce2e5c463436b105a88b8197cdfa3462ff1260aa4072482bb0bdc6b"} Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.502223 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-x4gxj" Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.535543 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"36a591050aa0b9f1202328585cc6e296ce83c49824afb9b8a1292713799beeec"} Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.536304 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"2c82b926383a0c0d90388e4d5ab7b3327886e8351cea0e7edf651e99569d1ab6"} Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.536417 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"41fbaebc39be36ee52c47d74e2888a476cfdf76962771d24db0cbbe01ff807bf"} Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.536479 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"0f2727b0ee883208bba3dd5c587cde2751e71a8c5b72e97c0667a5a0905da7db"} Jan 23 14:19:36 crc kubenswrapper[4775]: I0123 14:19:36.536541 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"40956cc256fd2b5b2ded2fe0ed28aceb74acb9a74f12117920e4aadd8f45b915"} Jan 23 14:19:37 crc kubenswrapper[4775]: I0123 14:19:37.548051 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pv6fp" event={"ID":"6831fcdc-628b-4bef-bf9c-5e24b63f9196","Type":"ContainerStarted","Data":"bf0c0d463faa2515d69243d5412f283f99fc1a34983ea09c208e0a76629d7c7e"} Jan 23 14:19:37 crc 
kubenswrapper[4775]: I0123 14:19:37.548500 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:37 crc kubenswrapper[4775]: I0123 14:19:37.586838 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pv6fp" podStartSLOduration=5.761049517 podStartE2EDuration="13.586772632s" podCreationTimestamp="2026-01-23 14:19:24 +0000 UTC" firstStartedPulling="2026-01-23 14:19:25.046621783 +0000 UTC m=+912.041450533" lastFinishedPulling="2026-01-23 14:19:32.872344898 +0000 UTC m=+919.867173648" observedRunningTime="2026-01-23 14:19:37.582543829 +0000 UTC m=+924.577372629" watchObservedRunningTime="2026-01-23 14:19:37.586772632 +0000 UTC m=+924.581601402" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.097502 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j"] Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.099761 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.102303 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.144098 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j"] Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.167934 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.168137 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.168252 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqxxc\" (UniqueName: \"kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.269574 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 
14:19:38.269639 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.269669 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqxxc\" (UniqueName: \"kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.270742 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.270833 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.308176 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqxxc\" (UniqueName: \"kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.426769 4775 util.go:30] "No sandbox for pod can be found. 
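
The bundle-unpack pod above mounts two emptyDir volumes, bundle and util, plus the projected service-account token volume kube-api-access-fqxxc; VerifyControllerAttachedVolume is effectively a no-op for these node-local volume types, after which SetUp runs once per volume, as the entries show. A sketch of the matching volume stanza with the corev1 Go types; the volume names come from the log, the mount paths are assumptions:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Scratch space shared by the unpack containers; the projected
        // kube-api-access-* token volume is injected by the API server
        // and omitted here.
        vols := []corev1.Volume{
            {Name: "bundle", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
            {Name: "util", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
        }
        mounts := []corev1.VolumeMount{
            {Name: "bundle", MountPath: "/bundle"}, // path assumed
            {Name: "util", MountPath: "/util"},     // path assumed
        }
        fmt.Println(len(vols), len(mounts))
    }
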
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:38 crc kubenswrapper[4775]: I0123 14:19:38.724812 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j"] Jan 23 14:19:39 crc kubenswrapper[4775]: I0123 14:19:39.566917 4775 generic.go:334] "Generic (PLEG): container finished" podID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerID="a937d438a1a4270f6d3d40caa3a143bd0e86460a1e35413e1f358c0140018f34" exitCode=0 Jan 23 14:19:39 crc kubenswrapper[4775]: I0123 14:19:39.566979 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" event={"ID":"44d1d9d6-a01e-49cc-8066-15c9954fda32","Type":"ContainerDied","Data":"a937d438a1a4270f6d3d40caa3a143bd0e86460a1e35413e1f358c0140018f34"} Jan 23 14:19:39 crc kubenswrapper[4775]: I0123 14:19:39.567056 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" event={"ID":"44d1d9d6-a01e-49cc-8066-15c9954fda32","Type":"ContainerStarted","Data":"4a74468451db63e620eca8183b66f307dbb5ffe1fcc040bb9f3f188b51856c1a"} Jan 23 14:19:39 crc kubenswrapper[4775]: I0123 14:19:39.901298 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:39 crc kubenswrapper[4775]: I0123 14:19:39.955225 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.439877 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.441228 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.450505 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.561548 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cszrm\" (UniqueName: \"kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.561921 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.561951 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.663035 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cszrm\" (UniqueName: \"kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.663132 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.663159 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.663658 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.664588 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.684746 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cszrm\" (UniqueName: \"kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm\") pod \"certified-operators-qngpp\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:42 crc kubenswrapper[4775]: I0123 14:19:42.777268 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:43 crc kubenswrapper[4775]: I0123 14:19:43.746167 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.440984 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.442406 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.454964 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.487963 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl9q6\" (UniqueName: \"kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.488092 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.488207 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.588656 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl9q6\" (UniqueName: \"kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.588733 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.588785 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content\") pod \"community-operators-28swh\" (UID: 
\"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.589377 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.589618 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.599910 4775 generic.go:334] "Generic (PLEG): container finished" podID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerID="554b1bc2f8959c11cce28ea694e21d41862ea18102ef0961167c2c12bb03ef3f" exitCode=0 Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.599985 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerDied","Data":"554b1bc2f8959c11cce28ea694e21d41862ea18102ef0961167c2c12bb03ef3f"} Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.600016 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerStarted","Data":"6d52c73638747afa5394f7ec7461317e60f2cd383c74573847591a6d789baafc"} Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.602522 4775 generic.go:334] "Generic (PLEG): container finished" podID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerID="b56c53dc9ee2a924fe03668f427ab41d5339a983cc8e557939a0d1ed0c78ddc5" exitCode=0 Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.602552 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" event={"ID":"44d1d9d6-a01e-49cc-8066-15c9954fda32","Type":"ContainerDied","Data":"b56c53dc9ee2a924fe03668f427ab41d5339a983cc8e557939a0d1ed0c78ddc5"} Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.622930 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl9q6\" (UniqueName: \"kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6\") pod \"community-operators-28swh\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:44 crc kubenswrapper[4775]: I0123 14:19:44.812413 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.303722 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:19:45 crc kubenswrapper[4775]: W0123 14:19:45.308620 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10fc232f_aecc_4d2b_9dd2_48723f0a0cd6.slice/crio-633fbf5393f98d989746185197ee37b28ecb217d8912287bc04eb2dc32f94dd0 WatchSource:0}: Error finding container 633fbf5393f98d989746185197ee37b28ecb217d8912287bc04eb2dc32f94dd0: Status 404 returned error can't find the container with id 633fbf5393f98d989746185197ee37b28ecb217d8912287bc04eb2dc32f94dd0 Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.497717 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-p49hv" Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.608880 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerDied","Data":"734653dab9d52bff0f3497315e73dde164639ca66bbedaae913a7a71ae66a1e6"} Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.609343 4775 generic.go:334] "Generic (PLEG): container finished" podID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerID="734653dab9d52bff0f3497315e73dde164639ca66bbedaae913a7a71ae66a1e6" exitCode=0 Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.609591 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerStarted","Data":"633fbf5393f98d989746185197ee37b28ecb217d8912287bc04eb2dc32f94dd0"} Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.612887 4775 generic.go:334] "Generic (PLEG): container finished" podID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerID="cdb92bb89e05f5403a9d650767375bffbbaa6c149b86380481f9447fb457144b" exitCode=0 Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.612969 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" event={"ID":"44d1d9d6-a01e-49cc-8066-15c9954fda32","Type":"ContainerDied","Data":"cdb92bb89e05f5403a9d650767375bffbbaa6c149b86380481f9447fb457144b"} Jan 23 14:19:45 crc kubenswrapper[4775]: I0123 14:19:45.633604 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-7qz58" Jan 23 14:19:46 crc kubenswrapper[4775]: I0123 14:19:46.618867 4775 generic.go:334] "Generic (PLEG): container finished" podID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerID="8fca8e5c22c6133f1f833c005890db3331499f65f7676fdfff1c29e1f3758837" exitCode=0 Jan 23 14:19:46 crc kubenswrapper[4775]: I0123 14:19:46.618969 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerDied","Data":"8fca8e5c22c6133f1f833c005890db3331499f65f7676fdfff1c29e1f3758837"} Jan 23 14:19:46 crc kubenswrapper[4775]: I0123 14:19:46.936822 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.119173 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqxxc\" (UniqueName: \"kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc\") pod \"44d1d9d6-a01e-49cc-8066-15c9954fda32\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.119321 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util\") pod \"44d1d9d6-a01e-49cc-8066-15c9954fda32\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.119424 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle\") pod \"44d1d9d6-a01e-49cc-8066-15c9954fda32\" (UID: \"44d1d9d6-a01e-49cc-8066-15c9954fda32\") " Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.120898 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle" (OuterVolumeSpecName: "bundle") pod "44d1d9d6-a01e-49cc-8066-15c9954fda32" (UID: "44d1d9d6-a01e-49cc-8066-15c9954fda32"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.132093 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc" (OuterVolumeSpecName: "kube-api-access-fqxxc") pod "44d1d9d6-a01e-49cc-8066-15c9954fda32" (UID: "44d1d9d6-a01e-49cc-8066-15c9954fda32"). InnerVolumeSpecName "kube-api-access-fqxxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.135190 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util" (OuterVolumeSpecName: "util") pod "44d1d9d6-a01e-49cc-8066-15c9954fda32" (UID: "44d1d9d6-a01e-49cc-8066-15c9954fda32"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.222195 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-util\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.222478 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/44d1d9d6-a01e-49cc-8066-15c9954fda32-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.222491 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqxxc\" (UniqueName: \"kubernetes.io/projected/44d1d9d6-a01e-49cc-8066-15c9954fda32-kube-api-access-fqxxc\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.627862 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" event={"ID":"44d1d9d6-a01e-49cc-8066-15c9954fda32","Type":"ContainerDied","Data":"4a74468451db63e620eca8183b66f307dbb5ffe1fcc040bb9f3f188b51856c1a"} Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.627904 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a74468451db63e620eca8183b66f307dbb5ffe1fcc040bb9f3f188b51856c1a" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.627990 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j" Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.631623 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerStarted","Data":"b55406f89f3ecb3c2b2573ae584fd05e9868c2d3e050b64a303b35fae7a85e4f"} Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.634716 4775 generic.go:334] "Generic (PLEG): container finished" podID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerID="b5b4183c4ad06b1c793fb4e19eb9cdd431330d6579e5c0ef66d97c8549fc3156" exitCode=0 Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.634761 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerDied","Data":"b5b4183c4ad06b1c793fb4e19eb9cdd431330d6579e5c0ef66d97c8549fc3156"} Jan 23 14:19:47 crc kubenswrapper[4775]: I0123 14:19:47.656725 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qngpp" podStartSLOduration=2.847736511 podStartE2EDuration="5.656705411s" podCreationTimestamp="2026-01-23 14:19:42 +0000 UTC" firstStartedPulling="2026-01-23 14:19:44.601857906 +0000 UTC m=+931.596686646" lastFinishedPulling="2026-01-23 14:19:47.410826796 +0000 UTC m=+934.405655546" observedRunningTime="2026-01-23 14:19:47.654764895 +0000 UTC m=+934.649593635" watchObservedRunningTime="2026-01-23 14:19:47.656705411 +0000 UTC m=+934.651534161" Jan 23 14:19:48 crc kubenswrapper[4775]: I0123 14:19:48.643060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerStarted","Data":"dde2078b2220981090dffac8d417342ba6d34ddd4114ab003180a42263594aaa"} Jan 23 14:19:48 crc kubenswrapper[4775]: I0123 
14:19:48.677070 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-28swh" podStartSLOduration=2.237197011 podStartE2EDuration="4.6770544s" podCreationTimestamp="2026-01-23 14:19:44 +0000 UTC" firstStartedPulling="2026-01-23 14:19:45.609866837 +0000 UTC m=+932.604695577" lastFinishedPulling="2026-01-23 14:19:48.049724186 +0000 UTC m=+935.044552966" observedRunningTime="2026-01-23 14:19:48.67395411 +0000 UTC m=+935.668782860" watchObservedRunningTime="2026-01-23 14:19:48.6770544 +0000 UTC m=+935.671883140" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.444527 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg"] Jan 23 14:19:51 crc kubenswrapper[4775]: E0123 14:19:51.445042 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="extract" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.445056 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="extract" Jan 23 14:19:51 crc kubenswrapper[4775]: E0123 14:19:51.445077 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="pull" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.445085 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="pull" Jan 23 14:19:51 crc kubenswrapper[4775]: E0123 14:19:51.445104 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="util" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.445113 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="util" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.445250 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d1d9d6-a01e-49cc-8066-15c9954fda32" containerName="extract" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.445723 4775 util.go:30] "No sandbox for pod can be found. 
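
In the pod_startup_latency_tracker entries, podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling); the arithmetic works out exactly on the monotonic clock, i.e. the m=+ offsets. Reproducing the community-operators-28swh numbers:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Monotonic m=+ offsets copied from the latency entry above.
        e2e := time.Duration(4.6770544 * float64(time.Second))
        firstPull := 932.604695577 // firstStartedPulling
        lastPull := 935.044552966  // lastFinishedPulling
        pull := time.Duration((lastPull - firstPull) * float64(time.Second))
        // The SLO duration excludes time spent pulling images.
        fmt.Println(e2e - pull) // ≈ 2.237197011s, matching podStartSLOduration
    }
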
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.447658 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-bmqvs" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.448256 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.454204 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.460781 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg"] Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.487687 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftl6p\" (UniqueName: \"kubernetes.io/projected/665532a6-49a8-4928-b5e1-909ac58bf7e8-kube-api-access-ftl6p\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.487735 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/665532a6-49a8-4928-b5e1-909ac58bf7e8-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.589131 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftl6p\" (UniqueName: \"kubernetes.io/projected/665532a6-49a8-4928-b5e1-909ac58bf7e8-kube-api-access-ftl6p\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.589192 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/665532a6-49a8-4928-b5e1-909ac58bf7e8-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.589707 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/665532a6-49a8-4928-b5e1-909ac58bf7e8-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.613857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftl6p\" (UniqueName: \"kubernetes.io/projected/665532a6-49a8-4928-b5e1-909ac58bf7e8-kube-api-access-ftl6p\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nmjlg\" (UID: \"665532a6-49a8-4928-b5e1-909ac58bf7e8\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:51 crc kubenswrapper[4775]: I0123 14:19:51.759842 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" Jan 23 14:19:52 crc kubenswrapper[4775]: I0123 14:19:52.013698 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg"] Jan 23 14:19:52 crc kubenswrapper[4775]: W0123 14:19:52.023741 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod665532a6_49a8_4928_b5e1_909ac58bf7e8.slice/crio-d09f9a109cdba1c9a180993078f63b4fb7ce6bf8a9e80bbe74e4cbcd291fdd59 WatchSource:0}: Error finding container d09f9a109cdba1c9a180993078f63b4fb7ce6bf8a9e80bbe74e4cbcd291fdd59: Status 404 returned error can't find the container with id d09f9a109cdba1c9a180993078f63b4fb7ce6bf8a9e80bbe74e4cbcd291fdd59 Jan 23 14:19:52 crc kubenswrapper[4775]: I0123 14:19:52.670253 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" event={"ID":"665532a6-49a8-4928-b5e1-909ac58bf7e8","Type":"ContainerStarted","Data":"d09f9a109cdba1c9a180993078f63b4fb7ce6bf8a9e80bbe74e4cbcd291fdd59"} Jan 23 14:19:52 crc kubenswrapper[4775]: I0123 14:19:52.777877 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:52 crc kubenswrapper[4775]: I0123 14:19:52.778430 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:52 crc kubenswrapper[4775]: I0123 14:19:52.847955 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.219185 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.219259 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.219323 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.220230 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.220330 4775 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881" gracePeriod=600 Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.680948 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881" exitCode=0 Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.681067 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881"} Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.681170 4775 scope.go:117] "RemoveContainer" containerID="815b4a32200fdfae17b328752ad92ad8ee14e4c70962ef6a5caef5715b1e0d13" Jan 23 14:19:53 crc kubenswrapper[4775]: I0123 14:19:53.738166 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:54 crc kubenswrapper[4775]: I0123 14:19:54.695076 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1"} Jan 23 14:19:54 crc kubenswrapper[4775]: I0123 14:19:54.813222 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:54 crc kubenswrapper[4775]: I0123 14:19:54.813495 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:54 crc kubenswrapper[4775]: I0123 14:19:54.865586 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:54 crc kubenswrapper[4775]: I0123 14:19:54.904139 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pv6fp" Jan 23 14:19:55 crc kubenswrapper[4775]: I0123 14:19:55.781858 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:19:56 crc kubenswrapper[4775]: I0123 14:19:56.826914 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:19:56 crc kubenswrapper[4775]: I0123 14:19:56.827116 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qngpp" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="registry-server" containerID="cri-o://b55406f89f3ecb3c2b2573ae584fd05e9868c2d3e050b64a303b35fae7a85e4f" gracePeriod=2 Jan 23 14:19:57 crc kubenswrapper[4775]: I0123 14:19:57.722884 4775 generic.go:334] "Generic (PLEG): container finished" podID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerID="b55406f89f3ecb3c2b2573ae584fd05e9868c2d3e050b64a303b35fae7a85e4f" exitCode=0 Jan 23 14:19:57 crc kubenswrapper[4775]: I0123 14:19:57.723998 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" 
event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerDied","Data":"b55406f89f3ecb3c2b2573ae584fd05e9868c2d3e050b64a303b35fae7a85e4f"} Jan 23 14:19:58 crc kubenswrapper[4775]: I0123 14:19:58.435717 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:19:58 crc kubenswrapper[4775]: I0123 14:19:58.730441 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-28swh" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="registry-server" containerID="cri-o://dde2078b2220981090dffac8d417342ba6d34ddd4114ab003180a42263594aaa" gracePeriod=2 Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.661496 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.726926 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cszrm\" (UniqueName: \"kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm\") pod \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.726995 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities\") pod \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.727039 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content\") pod \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\" (UID: \"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63\") " Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.727924 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities" (OuterVolumeSpecName: "utilities") pod "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" (UID: "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.733161 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm" (OuterVolumeSpecName: "kube-api-access-cszrm") pod "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" (UID: "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63"). InnerVolumeSpecName "kube-api-access-cszrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.739990 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qngpp" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.742126 4775 generic.go:334] "Generic (PLEG): container finished" podID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerID="dde2078b2220981090dffac8d417342ba6d34ddd4114ab003180a42263594aaa" exitCode=0 Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.764945 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngpp" event={"ID":"0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63","Type":"ContainerDied","Data":"6d52c73638747afa5394f7ec7461317e60f2cd383c74573847591a6d789baafc"} Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.765005 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerDied","Data":"dde2078b2220981090dffac8d417342ba6d34ddd4114ab003180a42263594aaa"} Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.765026 4775 scope.go:117] "RemoveContainer" containerID="b55406f89f3ecb3c2b2573ae584fd05e9868c2d3e050b64a303b35fae7a85e4f" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.780050 4775 scope.go:117] "RemoveContainer" containerID="8fca8e5c22c6133f1f833c005890db3331499f65f7676fdfff1c29e1f3758837" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.785831 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" (UID: "0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.803037 4775 scope.go:117] "RemoveContainer" containerID="554b1bc2f8959c11cce28ea694e21d41862ea18102ef0961167c2c12bb03ef3f" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.828788 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cszrm\" (UniqueName: \"kubernetes.io/projected/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-kube-api-access-cszrm\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.828951 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.828961 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:19:59 crc kubenswrapper[4775]: I0123 14:19:59.921991 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.031251 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cl9q6\" (UniqueName: \"kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6\") pod \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.031371 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities\") pod \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.031527 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content\") pod \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\" (UID: \"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6\") " Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.032781 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities" (OuterVolumeSpecName: "utilities") pod "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" (UID: "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.037272 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6" (OuterVolumeSpecName: "kube-api-access-cl9q6") pod "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" (UID: "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6"). InnerVolumeSpecName "kube-api-access-cl9q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.075913 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.083458 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qngpp"] Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.125776 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" (UID: "10fc232f-aecc-4d2b-9dd2-48723f0a0cd6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.133413 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.133453 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cl9q6\" (UniqueName: \"kubernetes.io/projected/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-kube-api-access-cl9q6\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.133470 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.751567 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" event={"ID":"665532a6-49a8-4928-b5e1-909ac58bf7e8","Type":"ContainerStarted","Data":"a13fe513a735533b95506eebc589bb4bfb9fa48cb49c9a4919b7f7ce1307f660"} Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.755406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-28swh" event={"ID":"10fc232f-aecc-4d2b-9dd2-48723f0a0cd6","Type":"ContainerDied","Data":"633fbf5393f98d989746185197ee37b28ecb217d8912287bc04eb2dc32f94dd0"} Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.755576 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-28swh" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.755728 4775 scope.go:117] "RemoveContainer" containerID="dde2078b2220981090dffac8d417342ba6d34ddd4114ab003180a42263594aaa" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.786188 4775 scope.go:117] "RemoveContainer" containerID="b5b4183c4ad06b1c793fb4e19eb9cdd431330d6579e5c0ef66d97c8549fc3156" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.802864 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nmjlg" podStartSLOduration=2.144473056 podStartE2EDuration="9.802786205s" podCreationTimestamp="2026-01-23 14:19:51 +0000 UTC" firstStartedPulling="2026-01-23 14:19:52.027543675 +0000 UTC m=+939.022372415" lastFinishedPulling="2026-01-23 14:19:59.685856814 +0000 UTC m=+946.680685564" observedRunningTime="2026-01-23 14:20:00.775950836 +0000 UTC m=+947.770779586" watchObservedRunningTime="2026-01-23 14:20:00.802786205 +0000 UTC m=+947.797614985" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.809855 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.817054 4775 scope.go:117] "RemoveContainer" containerID="734653dab9d52bff0f3497315e73dde164639ca66bbedaae913a7a71ae66a1e6" Jan 23 14:20:00 crc kubenswrapper[4775]: I0123 14:20:00.822564 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-28swh"] Jan 23 14:20:01 crc kubenswrapper[4775]: I0123 14:20:01.731589 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" path="/var/lib/kubelet/pods/0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63/volumes" Jan 23 14:20:01 
crc kubenswrapper[4775]: I0123 14:20:01.734084 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" path="/var/lib/kubelet/pods/10fc232f-aecc-4d2b-9dd2-48723f0a0cd6/volumes" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899024 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w6lsn"] Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899514 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="extract-content" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899529 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="extract-content" Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899546 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="registry-server" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899554 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="registry-server" Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899564 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="extract-content" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899575 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="extract-content" Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899586 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="extract-utilities" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899594 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="extract-utilities" Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899607 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="extract-utilities" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899614 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="extract-utilities" Jan 23 14:20:03 crc kubenswrapper[4775]: E0123 14:20:03.899632 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="registry-server" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899639 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="registry-server" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899757 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fbbcf2a-ba2d-45a2-ab13-7fdc90d94c63" containerName="registry-server" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.899780 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="10fc232f-aecc-4d2b-9dd2-48723f0a0cd6" containerName="registry-server" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.900223 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.902017 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.902127 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.902452 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-cv6mh" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.917107 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w6lsn"] Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.985031 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:03 crc kubenswrapper[4775]: I0123 14:20:03.985077 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl6mp\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-kube-api-access-cl6mp\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.086531 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.086590 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl6mp\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-kube-api-access-cl6mp\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.108761 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl6mp\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-kube-api-access-cl6mp\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.113605 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3613a1b4-54b6-4a47-988a-a6624d530636-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-w6lsn\" (UID: \"3613a1b4-54b6-4a47-988a-a6624d530636\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.216072 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.445254 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-w6lsn"] Jan 23 14:20:04 crc kubenswrapper[4775]: I0123 14:20:04.778178 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" event={"ID":"3613a1b4-54b6-4a47-988a-a6624d530636","Type":"ContainerStarted","Data":"f1681bff1fd4cb62e3b9069e3a92f32f775c56a2291e72d7edc490a941292dfe"} Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.754437 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-qsmln"] Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.756044 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.761890 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-qsmln"] Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.762905 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-b4sr7" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.827285 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk7cm\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-kube-api-access-lk7cm\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.827346 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.928621 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk7cm\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-kube-api-access-lk7cm\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.928732 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.965108 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk7cm\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-kube-api-access-lk7cm\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:06 crc kubenswrapper[4775]: I0123 14:20:06.966102 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/620134d3-d230-4c5b-8aaf-4213bcba307c-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-qsmln\" (UID: \"620134d3-d230-4c5b-8aaf-4213bcba307c\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:07 crc kubenswrapper[4775]: I0123 14:20:07.081482 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" Jan 23 14:20:07 crc kubenswrapper[4775]: I0123 14:20:07.507893 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-qsmln"] Jan 23 14:20:07 crc kubenswrapper[4775]: W0123 14:20:07.520077 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod620134d3_d230_4c5b_8aaf_4213bcba307c.slice/crio-9fa17eb4ee79f3cd2796c4e26ff3ff2b93f76514f9375de1060a6defb638f246 WatchSource:0}: Error finding container 9fa17eb4ee79f3cd2796c4e26ff3ff2b93f76514f9375de1060a6defb638f246: Status 404 returned error can't find the container with id 9fa17eb4ee79f3cd2796c4e26ff3ff2b93f76514f9375de1060a6defb638f246 Jan 23 14:20:07 crc kubenswrapper[4775]: I0123 14:20:07.804901 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" event={"ID":"620134d3-d230-4c5b-8aaf-4213bcba307c","Type":"ContainerStarted","Data":"9fa17eb4ee79f3cd2796c4e26ff3ff2b93f76514f9375de1060a6defb638f246"} Jan 23 14:20:11 crc kubenswrapper[4775]: I0123 14:20:11.843776 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:11 crc kubenswrapper[4775]: I0123 14:20:11.845385 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:11 crc kubenswrapper[4775]: I0123 14:20:11.875496 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.022243 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.022337 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6xrz\" (UniqueName: \"kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.022376 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.123918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6xrz\" (UniqueName: \"kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.123985 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.124055 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.124661 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.124924 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.146615 4775 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-b6xrz\" (UniqueName: \"kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz\") pod \"redhat-marketplace-xnjc2\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:12 crc kubenswrapper[4775]: I0123 14:20:12.181200 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:14 crc kubenswrapper[4775]: W0123 14:20:14.538966 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod692e7969_32d8_473a_8e1c_22122a398b6b.slice/crio-a62a35b6fa8b7f21190e89aec2597a4e21721e5064ed9a9af9186117b5aa5b02 WatchSource:0}: Error finding container a62a35b6fa8b7f21190e89aec2597a4e21721e5064ed9a9af9186117b5aa5b02: Status 404 returned error can't find the container with id a62a35b6fa8b7f21190e89aec2597a4e21721e5064ed9a9af9186117b5aa5b02 Jan 23 14:20:14 crc kubenswrapper[4775]: I0123 14:20:14.539876 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.542567 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" event={"ID":"620134d3-d230-4c5b-8aaf-4213bcba307c","Type":"ContainerStarted","Data":"541e19b03608dbc973d3ea2fbd9e8d4dbbc13932ecd7b2503495e90b9d542c52"} Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.544751 4775 generic.go:334] "Generic (PLEG): container finished" podID="692e7969-32d8-473a-8e1c-22122a398b6b" containerID="3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247" exitCode=0 Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.544812 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerDied","Data":"3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247"} Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.544862 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerStarted","Data":"a62a35b6fa8b7f21190e89aec2597a4e21721e5064ed9a9af9186117b5aa5b02"} Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.546880 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" event={"ID":"3613a1b4-54b6-4a47-988a-a6624d530636","Type":"ContainerStarted","Data":"0d3d04e71da29033a048f89ed0f44fbfe0349d6eb20f86e4b597408dc16f9b20"} Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.547026 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.562125 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-qsmln" podStartSLOduration=3.639717895 podStartE2EDuration="9.56210854s" podCreationTimestamp="2026-01-23 14:20:06 +0000 UTC" firstStartedPulling="2026-01-23 14:20:07.522184229 +0000 UTC m=+954.517012969" lastFinishedPulling="2026-01-23 14:20:13.444574874 +0000 UTC m=+960.439403614" observedRunningTime="2026-01-23 14:20:15.560310358 +0000 UTC m=+962.555139098" watchObservedRunningTime="2026-01-23 14:20:15.56210854 +0000 UTC 
m=+962.556937280" Jan 23 14:20:15 crc kubenswrapper[4775]: I0123 14:20:15.605385 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" podStartSLOduration=3.569814622 podStartE2EDuration="12.605367175s" podCreationTimestamp="2026-01-23 14:20:03 +0000 UTC" firstStartedPulling="2026-01-23 14:20:04.454445049 +0000 UTC m=+951.449273799" lastFinishedPulling="2026-01-23 14:20:13.489997612 +0000 UTC m=+960.484826352" observedRunningTime="2026-01-23 14:20:15.604986414 +0000 UTC m=+962.599815154" watchObservedRunningTime="2026-01-23 14:20:15.605367175 +0000 UTC m=+962.600195915" Jan 23 14:20:16 crc kubenswrapper[4775]: I0123 14:20:16.554776 4775 generic.go:334] "Generic (PLEG): container finished" podID="692e7969-32d8-473a-8e1c-22122a398b6b" containerID="38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d" exitCode=0 Jan 23 14:20:16 crc kubenswrapper[4775]: I0123 14:20:16.554898 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerDied","Data":"38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d"} Jan 23 14:20:17 crc kubenswrapper[4775]: I0123 14:20:17.562458 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerStarted","Data":"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac"} Jan 23 14:20:17 crc kubenswrapper[4775]: I0123 14:20:17.597345 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xnjc2" podStartSLOduration=5.085140578 podStartE2EDuration="6.597325429s" podCreationTimestamp="2026-01-23 14:20:11 +0000 UTC" firstStartedPulling="2026-01-23 14:20:15.546653522 +0000 UTC m=+962.541482262" lastFinishedPulling="2026-01-23 14:20:17.058838363 +0000 UTC m=+964.053667113" observedRunningTime="2026-01-23 14:20:17.595420883 +0000 UTC m=+964.590249623" watchObservedRunningTime="2026-01-23 14:20:17.597325429 +0000 UTC m=+964.592154169" Jan 23 14:20:19 crc kubenswrapper[4775]: I0123 14:20:19.219667 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-w6lsn" Jan 23 14:20:21 crc kubenswrapper[4775]: I0123 14:20:21.830547 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dzfhf"] Jan 23 14:20:21 crc kubenswrapper[4775]: I0123 14:20:21.832639 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:21 crc kubenswrapper[4775]: I0123 14:20:21.837329 4775 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-2tmb2" Jan 23 14:20:21 crc kubenswrapper[4775]: I0123 14:20:21.843453 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dzfhf"] Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.004119 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4r9\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-kube-api-access-7z4r9\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.004169 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-bound-sa-token\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.105703 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4r9\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-kube-api-access-7z4r9\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.105790 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-bound-sa-token\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.130169 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-bound-sa-token\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.130397 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z4r9\" (UniqueName: \"kubernetes.io/projected/2a26d984-5abe-44ce-ad1e-25842b8f7e51-kube-api-access-7z4r9\") pod \"cert-manager-86cb77c54b-dzfhf\" (UID: \"2a26d984-5abe-44ce-ad1e-25842b8f7e51\") " pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.164934 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-dzfhf" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.182359 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.182394 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.243169 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.575299 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-dzfhf"] Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.608428 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-dzfhf" event={"ID":"2a26d984-5abe-44ce-ad1e-25842b8f7e51","Type":"ContainerStarted","Data":"76608cd634deb86296a1f8b61789982cb5efa5a729f506ad2806b80067d1bca1"} Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.670320 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:22 crc kubenswrapper[4775]: I0123 14:20:22.724716 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:23 crc kubenswrapper[4775]: I0123 14:20:23.619420 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-dzfhf" event={"ID":"2a26d984-5abe-44ce-ad1e-25842b8f7e51","Type":"ContainerStarted","Data":"768b510429bf3bf9266ad9b99d279d4dee3e9b1c53590925f4b62ff38ebf5de8"} Jan 23 14:20:23 crc kubenswrapper[4775]: I0123 14:20:23.653786 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-dzfhf" podStartSLOduration=2.653755204 podStartE2EDuration="2.653755204s" podCreationTimestamp="2026-01-23 14:20:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:20:23.640846279 +0000 UTC m=+970.635675019" watchObservedRunningTime="2026-01-23 14:20:23.653755204 +0000 UTC m=+970.648583974" Jan 23 14:20:24 crc kubenswrapper[4775]: I0123 14:20:24.626409 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xnjc2" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="registry-server" containerID="cri-o://143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac" gracePeriod=2 Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.069901 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.250623 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6xrz\" (UniqueName: \"kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz\") pod \"692e7969-32d8-473a-8e1c-22122a398b6b\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.250723 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content\") pod \"692e7969-32d8-473a-8e1c-22122a398b6b\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.250791 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities\") pod \"692e7969-32d8-473a-8e1c-22122a398b6b\" (UID: \"692e7969-32d8-473a-8e1c-22122a398b6b\") " Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.252534 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities" (OuterVolumeSpecName: "utilities") pod "692e7969-32d8-473a-8e1c-22122a398b6b" (UID: "692e7969-32d8-473a-8e1c-22122a398b6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.262079 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz" (OuterVolumeSpecName: "kube-api-access-b6xrz") pod "692e7969-32d8-473a-8e1c-22122a398b6b" (UID: "692e7969-32d8-473a-8e1c-22122a398b6b"). InnerVolumeSpecName "kube-api-access-b6xrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.294226 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "692e7969-32d8-473a-8e1c-22122a398b6b" (UID: "692e7969-32d8-473a-8e1c-22122a398b6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.352736 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6xrz\" (UniqueName: \"kubernetes.io/projected/692e7969-32d8-473a-8e1c-22122a398b6b-kube-api-access-b6xrz\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.352789 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.352858 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/692e7969-32d8-473a-8e1c-22122a398b6b-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.633285 4775 generic.go:334] "Generic (PLEG): container finished" podID="692e7969-32d8-473a-8e1c-22122a398b6b" containerID="143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac" exitCode=0 Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.633323 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerDied","Data":"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac"} Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.633352 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xnjc2" event={"ID":"692e7969-32d8-473a-8e1c-22122a398b6b","Type":"ContainerDied","Data":"a62a35b6fa8b7f21190e89aec2597a4e21721e5064ed9a9af9186117b5aa5b02"} Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.633351 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xnjc2" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.633429 4775 scope.go:117] "RemoveContainer" containerID="143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.649155 4775 scope.go:117] "RemoveContainer" containerID="38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.659340 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.669728 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xnjc2"] Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.677511 4775 scope.go:117] "RemoveContainer" containerID="3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.690622 4775 scope.go:117] "RemoveContainer" containerID="143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac" Jan 23 14:20:25 crc kubenswrapper[4775]: E0123 14:20:25.691159 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac\": container with ID starting with 143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac not found: ID does not exist" containerID="143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.691188 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac"} err="failed to get container status \"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac\": rpc error: code = NotFound desc = could not find container \"143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac\": container with ID starting with 143dc3d685be3156e9d2a3aa85cb50d0c15e2ed2c8726d48309f751495f122ac not found: ID does not exist" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.691212 4775 scope.go:117] "RemoveContainer" containerID="38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d" Jan 23 14:20:25 crc kubenswrapper[4775]: E0123 14:20:25.691499 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d\": container with ID starting with 38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d not found: ID does not exist" containerID="38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.691552 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d"} err="failed to get container status \"38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d\": rpc error: code = NotFound desc = could not find container \"38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d\": container with ID starting with 38befce2410f86731c0e2b3323874ab187ce997d16ea3c52af26e1be11e45f7d not found: ID does not exist" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.691587 4775 scope.go:117] "RemoveContainer" 
containerID="3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247" Jan 23 14:20:25 crc kubenswrapper[4775]: E0123 14:20:25.691930 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247\": container with ID starting with 3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247 not found: ID does not exist" containerID="3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.691960 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247"} err="failed to get container status \"3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247\": rpc error: code = NotFound desc = could not find container \"3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247\": container with ID starting with 3c9bdb3ee97168179f059039bbbf9a4917bea48cc6d646023a917d19e5dbc247 not found: ID does not exist" Jan 23 14:20:25 crc kubenswrapper[4775]: I0123 14:20:25.722262 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" path="/var/lib/kubelet/pods/692e7969-32d8-473a-8e1c-22122a398b6b/volumes" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.780069 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:32 crc kubenswrapper[4775]: E0123 14:20:32.780700 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="extract-utilities" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.780720 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="extract-utilities" Jan 23 14:20:32 crc kubenswrapper[4775]: E0123 14:20:32.780748 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="extract-content" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.780761 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="extract-content" Jan 23 14:20:32 crc kubenswrapper[4775]: E0123 14:20:32.780776 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="registry-server" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.780788 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="registry-server" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.781000 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="692e7969-32d8-473a-8e1c-22122a398b6b" containerName="registry-server" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.781724 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.791434 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-nht2h" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.791599 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.792039 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.803375 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.864762 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stljm\" (UniqueName: \"kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm\") pod \"openstack-operator-index-zsmtn\" (UID: \"24d6c5ef-17e4-48d8-ab0c-d5909563e217\") " pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.966634 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stljm\" (UniqueName: \"kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm\") pod \"openstack-operator-index-zsmtn\" (UID: \"24d6c5ef-17e4-48d8-ab0c-d5909563e217\") " pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:32 crc kubenswrapper[4775]: I0123 14:20:32.984403 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stljm\" (UniqueName: \"kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm\") pod \"openstack-operator-index-zsmtn\" (UID: \"24d6c5ef-17e4-48d8-ab0c-d5909563e217\") " pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:33 crc kubenswrapper[4775]: I0123 14:20:33.128425 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:33 crc kubenswrapper[4775]: I0123 14:20:33.399589 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:33 crc kubenswrapper[4775]: W0123 14:20:33.412997 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24d6c5ef_17e4_48d8_ab0c_d5909563e217.slice/crio-9f3c0c7b10a79ed4b95f841087f9f80ff3d8670967feeee1c3dfeb8cb6945e8b WatchSource:0}: Error finding container 9f3c0c7b10a79ed4b95f841087f9f80ff3d8670967feeee1c3dfeb8cb6945e8b: Status 404 returned error can't find the container with id 9f3c0c7b10a79ed4b95f841087f9f80ff3d8670967feeee1c3dfeb8cb6945e8b Jan 23 14:20:33 crc kubenswrapper[4775]: I0123 14:20:33.689104 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zsmtn" event={"ID":"24d6c5ef-17e4-48d8-ab0c-d5909563e217","Type":"ContainerStarted","Data":"9f3c0c7b10a79ed4b95f841087f9f80ff3d8670967feeee1c3dfeb8cb6945e8b"} Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.350994 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.714934 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-zsmtn" podUID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" containerName="registry-server" containerID="cri-o://56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d" gracePeriod=2 Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.720464 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zsmtn" event={"ID":"24d6c5ef-17e4-48d8-ab0c-d5909563e217","Type":"ContainerStarted","Data":"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d"} Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.964296 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zsmtn" podStartSLOduration=2.025417068 podStartE2EDuration="5.964268246s" podCreationTimestamp="2026-01-23 14:20:32 +0000 UTC" firstStartedPulling="2026-01-23 14:20:33.415199691 +0000 UTC m=+980.410028431" lastFinishedPulling="2026-01-23 14:20:37.354050869 +0000 UTC m=+984.348879609" observedRunningTime="2026-01-23 14:20:37.738971378 +0000 UTC m=+984.733800118" watchObservedRunningTime="2026-01-23 14:20:37.964268246 +0000 UTC m=+984.959097026" Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.972371 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-5czdz"] Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.974025 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:37 crc kubenswrapper[4775]: I0123 14:20:37.984721 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5czdz"] Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.130372 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547hs\" (UniqueName: \"kubernetes.io/projected/a0ddc210-ca29-42e4-a4c2-a07881434fed-kube-api-access-547hs\") pod \"openstack-operator-index-5czdz\" (UID: \"a0ddc210-ca29-42e4-a4c2-a07881434fed\") " pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.167693 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.231991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-547hs\" (UniqueName: \"kubernetes.io/projected/a0ddc210-ca29-42e4-a4c2-a07881434fed-kube-api-access-547hs\") pod \"openstack-operator-index-5czdz\" (UID: \"a0ddc210-ca29-42e4-a4c2-a07881434fed\") " pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.251959 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-547hs\" (UniqueName: \"kubernetes.io/projected/a0ddc210-ca29-42e4-a4c2-a07881434fed-kube-api-access-547hs\") pod \"openstack-operator-index-5czdz\" (UID: \"a0ddc210-ca29-42e4-a4c2-a07881434fed\") " pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.306591 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.333176 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stljm\" (UniqueName: \"kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm\") pod \"24d6c5ef-17e4-48d8-ab0c-d5909563e217\" (UID: \"24d6c5ef-17e4-48d8-ab0c-d5909563e217\") " Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.338328 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm" (OuterVolumeSpecName: "kube-api-access-stljm") pod "24d6c5ef-17e4-48d8-ab0c-d5909563e217" (UID: "24d6c5ef-17e4-48d8-ab0c-d5909563e217"). InnerVolumeSpecName "kube-api-access-stljm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.434686 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stljm\" (UniqueName: \"kubernetes.io/projected/24d6c5ef-17e4-48d8-ab0c-d5909563e217-kube-api-access-stljm\") on node \"crc\" DevicePath \"\"" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.547557 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-5czdz"] Jan 23 14:20:38 crc kubenswrapper[4775]: W0123 14:20:38.550966 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0ddc210_ca29_42e4_a4c2_a07881434fed.slice/crio-9b2975a6c6021d6aa16aa9925020342b62450e4557243c4bb1c36a1db95a2ba2 WatchSource:0}: Error finding container 9b2975a6c6021d6aa16aa9925020342b62450e4557243c4bb1c36a1db95a2ba2: Status 404 returned error can't find the container with id 9b2975a6c6021d6aa16aa9925020342b62450e4557243c4bb1c36a1db95a2ba2 Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.726597 4775 generic.go:334] "Generic (PLEG): container finished" podID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" containerID="56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d" exitCode=0 Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.726705 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zsmtn" event={"ID":"24d6c5ef-17e4-48d8-ab0c-d5909563e217","Type":"ContainerDied","Data":"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d"} Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.726784 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zsmtn" event={"ID":"24d6c5ef-17e4-48d8-ab0c-d5909563e217","Type":"ContainerDied","Data":"9f3c0c7b10a79ed4b95f841087f9f80ff3d8670967feeee1c3dfeb8cb6945e8b"} Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.726727 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-zsmtn" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.726860 4775 scope.go:117] "RemoveContainer" containerID="56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.731900 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5czdz" event={"ID":"a0ddc210-ca29-42e4-a4c2-a07881434fed","Type":"ContainerStarted","Data":"9b2975a6c6021d6aa16aa9925020342b62450e4557243c4bb1c36a1db95a2ba2"} Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.759847 4775 scope.go:117] "RemoveContainer" containerID="56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d" Jan 23 14:20:38 crc kubenswrapper[4775]: E0123 14:20:38.760469 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d\": container with ID starting with 56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d not found: ID does not exist" containerID="56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.760505 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d"} err="failed to get container status \"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d\": rpc error: code = NotFound desc = could not find container \"56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d\": container with ID starting with 56371e9dfb1845e9cc35ccf702bc0499d3b0c9dd260869288c8076d73c3c566d not found: ID does not exist" Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.787462 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:38 crc kubenswrapper[4775]: I0123 14:20:38.797745 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-zsmtn"] Jan 23 14:20:39 crc kubenswrapper[4775]: I0123 14:20:39.726619 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" path="/var/lib/kubelet/pods/24d6c5ef-17e4-48d8-ab0c-d5909563e217/volumes" Jan 23 14:20:39 crc kubenswrapper[4775]: I0123 14:20:39.748149 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-5czdz" event={"ID":"a0ddc210-ca29-42e4-a4c2-a07881434fed","Type":"ContainerStarted","Data":"84b1d07b4bf4f3802ebc525ff6cc420874569362f03db8681190e265d36844f9"} Jan 23 14:20:39 crc kubenswrapper[4775]: I0123 14:20:39.776161 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-5czdz" podStartSLOduration=2.716825343 podStartE2EDuration="2.776140464s" podCreationTimestamp="2026-01-23 14:20:37 +0000 UTC" firstStartedPulling="2026-01-23 14:20:38.554654678 +0000 UTC m=+985.549483418" lastFinishedPulling="2026-01-23 14:20:38.613969799 +0000 UTC m=+985.608798539" observedRunningTime="2026-01-23 14:20:39.76875237 +0000 UTC m=+986.763581110" watchObservedRunningTime="2026-01-23 14:20:39.776140464 +0000 UTC m=+986.770969214" Jan 23 14:20:48 crc kubenswrapper[4775]: I0123 14:20:48.307144 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:48 crc kubenswrapper[4775]: I0123 14:20:48.307850 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:48 crc kubenswrapper[4775]: I0123 14:20:48.345148 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:48 crc kubenswrapper[4775]: I0123 14:20:48.847183 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-5czdz" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.124758 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt"] Jan 23 14:20:55 crc kubenswrapper[4775]: E0123 14:20:55.126038 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" containerName="registry-server" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.126072 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" containerName="registry-server" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.126374 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="24d6c5ef-17e4-48d8-ab0c-d5909563e217" containerName="registry-server" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.128535 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.131966 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nklzs" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.135391 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt"] Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.211130 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.211236 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.211291 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plqcc\" (UniqueName: \"kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 
14:20:55.312706 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.312755 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plqcc\" (UniqueName: \"kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.312838 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.313193 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.313230 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.329912 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plqcc\" (UniqueName: \"kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc\") pod \"0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.459878 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.737497 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt"] Jan 23 14:20:55 crc kubenswrapper[4775]: I0123 14:20:55.883356 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" event={"ID":"100f3a0b-4d11-495f-a6fe-57b196820ee3","Type":"ContainerStarted","Data":"86f5ecabdbde55812ad0e083a9973bb49d18923227d9246edb8e853ffbbb41fb"} Jan 23 14:20:56 crc kubenswrapper[4775]: I0123 14:20:56.894143 4775 generic.go:334] "Generic (PLEG): container finished" podID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerID="748926d979307840df862b3e8f8b2a8902ec143e560c1137489f7bd552663379" exitCode=0 Jan 23 14:20:56 crc kubenswrapper[4775]: I0123 14:20:56.894228 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" event={"ID":"100f3a0b-4d11-495f-a6fe-57b196820ee3","Type":"ContainerDied","Data":"748926d979307840df862b3e8f8b2a8902ec143e560c1137489f7bd552663379"} Jan 23 14:20:57 crc kubenswrapper[4775]: I0123 14:20:57.903616 4775 generic.go:334] "Generic (PLEG): container finished" podID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerID="90697243af1d8793a0de6b6bb13bd408648462fe0026e927bf20ccdddfadab9e" exitCode=0 Jan 23 14:20:57 crc kubenswrapper[4775]: I0123 14:20:57.904010 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" event={"ID":"100f3a0b-4d11-495f-a6fe-57b196820ee3","Type":"ContainerDied","Data":"90697243af1d8793a0de6b6bb13bd408648462fe0026e927bf20ccdddfadab9e"} Jan 23 14:20:58 crc kubenswrapper[4775]: I0123 14:20:58.915645 4775 generic.go:334] "Generic (PLEG): container finished" podID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerID="e7f9c451ab98994cd8a9da6ff9f29bbbd0ecc9fe81daeede319592422077c4c5" exitCode=0 Jan 23 14:20:58 crc kubenswrapper[4775]: I0123 14:20:58.915762 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" event={"ID":"100f3a0b-4d11-495f-a6fe-57b196820ee3","Type":"ContainerDied","Data":"e7f9c451ab98994cd8a9da6ff9f29bbbd0ecc9fe81daeede319592422077c4c5"} Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.276368 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.391706 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plqcc\" (UniqueName: \"kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc\") pod \"100f3a0b-4d11-495f-a6fe-57b196820ee3\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.391883 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle\") pod \"100f3a0b-4d11-495f-a6fe-57b196820ee3\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.391949 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util\") pod \"100f3a0b-4d11-495f-a6fe-57b196820ee3\" (UID: \"100f3a0b-4d11-495f-a6fe-57b196820ee3\") " Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.393619 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle" (OuterVolumeSpecName: "bundle") pod "100f3a0b-4d11-495f-a6fe-57b196820ee3" (UID: "100f3a0b-4d11-495f-a6fe-57b196820ee3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.400531 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc" (OuterVolumeSpecName: "kube-api-access-plqcc") pod "100f3a0b-4d11-495f-a6fe-57b196820ee3" (UID: "100f3a0b-4d11-495f-a6fe-57b196820ee3"). InnerVolumeSpecName "kube-api-access-plqcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.425797 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util" (OuterVolumeSpecName: "util") pod "100f3a0b-4d11-495f-a6fe-57b196820ee3" (UID: "100f3a0b-4d11-495f-a6fe-57b196820ee3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.493511 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plqcc\" (UniqueName: \"kubernetes.io/projected/100f3a0b-4d11-495f-a6fe-57b196820ee3-kube-api-access-plqcc\") on node \"crc\" DevicePath \"\"" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.493647 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.493697 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/100f3a0b-4d11-495f-a6fe-57b196820ee3-util\") on node \"crc\" DevicePath \"\"" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.938662 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" event={"ID":"100f3a0b-4d11-495f-a6fe-57b196820ee3","Type":"ContainerDied","Data":"86f5ecabdbde55812ad0e083a9973bb49d18923227d9246edb8e853ffbbb41fb"} Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.939098 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f5ecabdbde55812ad0e083a9973bb49d18923227d9246edb8e853ffbbb41fb" Jan 23 14:21:00 crc kubenswrapper[4775]: I0123 14:21:00.938764 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.826062 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:21:07 crc kubenswrapper[4775]: E0123 14:21:07.827448 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="pull" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.827482 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="pull" Jan 23 14:21:07 crc kubenswrapper[4775]: E0123 14:21:07.827511 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="util" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.827530 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="util" Jan 23 14:21:07 crc kubenswrapper[4775]: E0123 14:21:07.827562 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="extract" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.827580 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="extract" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.827877 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="100f3a0b-4d11-495f-a6fe-57b196820ee3" containerName="extract" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.828747 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.830858 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-nbgb2" Jan 23 14:21:07 crc kubenswrapper[4775]: I0123 14:21:07.854858 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.008465 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnvz4\" (UniqueName: \"kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4\") pod \"openstack-operator-controller-init-86f7b68b5c-stl6w\" (UID: \"355da547-d965-4754-8730-b9c8a20fd930\") " pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.109819 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnvz4\" (UniqueName: \"kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4\") pod \"openstack-operator-controller-init-86f7b68b5c-stl6w\" (UID: \"355da547-d965-4754-8730-b9c8a20fd930\") " pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.148880 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnvz4\" (UniqueName: \"kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4\") pod \"openstack-operator-controller-init-86f7b68b5c-stl6w\" (UID: \"355da547-d965-4754-8730-b9c8a20fd930\") " pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.150568 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.687542 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:21:08 crc kubenswrapper[4775]: I0123 14:21:08.995523 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" event={"ID":"355da547-d965-4754-8730-b9c8a20fd930","Type":"ContainerStarted","Data":"9226d2ede7beb9208ad931c1d54e8ae0eea8cc9501e5c82efcf4ccfa1586382e"} Jan 23 14:21:14 crc kubenswrapper[4775]: I0123 14:21:14.051254 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" event={"ID":"355da547-d965-4754-8730-b9c8a20fd930","Type":"ContainerStarted","Data":"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d"} Jan 23 14:21:14 crc kubenswrapper[4775]: I0123 14:21:14.052106 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:14 crc kubenswrapper[4775]: I0123 14:21:14.101662 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" podStartSLOduration=2.707238648 podStartE2EDuration="7.101642534s" podCreationTimestamp="2026-01-23 14:21:07 +0000 UTC" firstStartedPulling="2026-01-23 14:21:08.704331155 +0000 UTC m=+1015.699159925" lastFinishedPulling="2026-01-23 14:21:13.098735061 +0000 UTC m=+1020.093563811" observedRunningTime="2026-01-23 14:21:14.089559853 +0000 UTC m=+1021.084388623" watchObservedRunningTime="2026-01-23 14:21:14.101642534 +0000 UTC m=+1021.096471284" Jan 23 14:21:18 crc kubenswrapper[4775]: I0123 14:21:18.154793 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.269305 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.270766 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.272681 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-mdshv" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.280032 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.281630 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.283692 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-9cst5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.287944 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkts6\" (UniqueName: \"kubernetes.io/projected/56ee00d0-c0f0-442a-bf4a-7335b62c1c4e-kube-api-access-mkts6\") pod \"barbican-operator-controller-manager-7f86f8796f-pk9jd\" (UID: \"56ee00d0-c0f0-442a-bf4a-7335b62c1c4e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.291963 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.307596 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.314337 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.321211 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-g8vfs" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.334955 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.358664 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.380551 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.381280 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.382934 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6nb6c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.385148 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.385699 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.388395 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-cp4vx" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.392296 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmkp\" (UniqueName: \"kubernetes.io/projected/9ce79c2a-2c52-48de-80a6-887d592578d3-kube-api-access-xdmkp\") pod \"cinder-operator-controller-manager-69cf5d4557-dz7ft\" (UID: \"9ce79c2a-2c52-48de-80a6-887d592578d3\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.392384 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvk75\" (UniqueName: \"kubernetes.io/projected/352223d5-fa0a-43df-8bad-0eaa9b6b439d-kube-api-access-xvk75\") pod \"designate-operator-controller-manager-b45d7bf98-ppxmc\" (UID: \"352223d5-fa0a-43df-8bad-0eaa9b6b439d\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.392414 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkts6\" (UniqueName: \"kubernetes.io/projected/56ee00d0-c0f0-442a-bf4a-7335b62c1c4e-kube-api-access-mkts6\") pod \"barbican-operator-controller-manager-7f86f8796f-pk9jd\" (UID: \"56ee00d0-c0f0-442a-bf4a-7335b62c1c4e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.405852 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.410692 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.425936 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.426698 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.429281 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-rgcdb" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.437521 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkts6\" (UniqueName: \"kubernetes.io/projected/56ee00d0-c0f0-442a-bf4a-7335b62c1c4e-kube-api-access-mkts6\") pod \"barbican-operator-controller-manager-7f86f8796f-pk9jd\" (UID: \"56ee00d0-c0f0-442a-bf4a-7335b62c1c4e\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.452871 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.455913 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.456603 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.461352 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.461594 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jgkdm" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.467206 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.467922 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.471408 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-nljzc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.484141 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493782 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44lzg\" (UniqueName: \"kubernetes.io/projected/d98bebb2-a42a-45a6-b452-a82ce1f62896-kube-api-access-44lzg\") pod \"ironic-operator-controller-manager-598f7747c9-f7lm6\" (UID: \"d98bebb2-a42a-45a6-b452-a82ce1f62896\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493831 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg8zq\" (UniqueName: \"kubernetes.io/projected/64bae0eb-d703-4058-a545-b42d62045b90-kube-api-access-cg8zq\") pod \"glance-operator-controller-manager-78fdd796fd-jq89z\" (UID: \"64bae0eb-d703-4058-a545-b42d62045b90\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493855 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d67l\" (UniqueName: \"kubernetes.io/projected/841fb528-61a8-445e-a135-be26295bc975-kube-api-access-4d67l\") pod \"heat-operator-controller-manager-594c8c9d5d-xrmvt\" (UID: \"841fb528-61a8-445e-a135-be26295bc975\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493892 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvk75\" (UniqueName: \"kubernetes.io/projected/352223d5-fa0a-43df-8bad-0eaa9b6b439d-kube-api-access-xvk75\") pod \"designate-operator-controller-manager-b45d7bf98-ppxmc\" (UID: \"352223d5-fa0a-43df-8bad-0eaa9b6b439d\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493919 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5klj\" (UniqueName: \"kubernetes.io/projected/d9e69fcf-58c9-45fe-a291-4628c8219e10-kube-api-access-z5klj\") pod \"horizon-operator-controller-manager-77d5c5b54f-sg9x5\" (UID: \"d9e69fcf-58c9-45fe-a291-4628c8219e10\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493940 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.493981 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh5df\" (UniqueName: 
\"kubernetes.io/projected/5a65a9ef-28c7-46ae-826d-5546af1103a5-kube-api-access-bh5df\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.494000 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmkp\" (UniqueName: \"kubernetes.io/projected/9ce79c2a-2c52-48de-80a6-887d592578d3-kube-api-access-xdmkp\") pod \"cinder-operator-controller-manager-69cf5d4557-dz7ft\" (UID: \"9ce79c2a-2c52-48de-80a6-887d592578d3\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.500675 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.524042 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.524792 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.526243 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvk75\" (UniqueName: \"kubernetes.io/projected/352223d5-fa0a-43df-8bad-0eaa9b6b439d-kube-api-access-xvk75\") pod \"designate-operator-controller-manager-b45d7bf98-ppxmc\" (UID: \"352223d5-fa0a-43df-8bad-0eaa9b6b439d\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.526726 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-db6n4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.532278 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmkp\" (UniqueName: \"kubernetes.io/projected/9ce79c2a-2c52-48de-80a6-887d592578d3-kube-api-access-xdmkp\") pod \"cinder-operator-controller-manager-69cf5d4557-dz7ft\" (UID: \"9ce79c2a-2c52-48de-80a6-887d592578d3\") " pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.553726 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.554552 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.558105 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-mbbh9" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.570105 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.582865 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.589678 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.590546 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.592528 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-99hm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599148 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44lzg\" (UniqueName: \"kubernetes.io/projected/d98bebb2-a42a-45a6-b452-a82ce1f62896-kube-api-access-44lzg\") pod \"ironic-operator-controller-manager-598f7747c9-f7lm6\" (UID: \"d98bebb2-a42a-45a6-b452-a82ce1f62896\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599207 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg8zq\" (UniqueName: \"kubernetes.io/projected/64bae0eb-d703-4058-a545-b42d62045b90-kube-api-access-cg8zq\") pod \"glance-operator-controller-manager-78fdd796fd-jq89z\" (UID: \"64bae0eb-d703-4058-a545-b42d62045b90\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599237 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d67l\" (UniqueName: \"kubernetes.io/projected/841fb528-61a8-445e-a135-be26295bc975-kube-api-access-4d67l\") pod \"heat-operator-controller-manager-594c8c9d5d-xrmvt\" (UID: \"841fb528-61a8-445e-a135-be26295bc975\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599279 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfgdr\" (UniqueName: \"kubernetes.io/projected/853c6152-25bf-4374-a941-f9cd4202c87f-kube-api-access-pfgdr\") pod \"manila-operator-controller-manager-78c6999f6f-pfdc5\" (UID: \"853c6152-25bf-4374-a941-f9cd4202c87f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599320 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5klj\" (UniqueName: \"kubernetes.io/projected/d9e69fcf-58c9-45fe-a291-4628c8219e10-kube-api-access-z5klj\") pod \"horizon-operator-controller-manager-77d5c5b54f-sg9x5\" (UID: \"d9e69fcf-58c9-45fe-a291-4628c8219e10\") " 
pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599353 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599397 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6g4\" (UniqueName: \"kubernetes.io/projected/0784c928-e0c5-4afb-99cb-4f1f96820a14-kube-api-access-th6g4\") pod \"keystone-operator-controller-manager-b8b6d4659-bgbpj\" (UID: \"0784c928-e0c5-4afb-99cb-4f1f96820a14\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599438 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh5df\" (UniqueName: \"kubernetes.io/projected/5a65a9ef-28c7-46ae-826d-5546af1103a5-kube-api-access-bh5df\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.599970 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:21:39 crc kubenswrapper[4775]: E0123 14:21:39.600692 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:39 crc kubenswrapper[4775]: E0123 14:21:39.600727 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert podName:5a65a9ef-28c7-46ae-826d-5546af1103a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:40.100713107 +0000 UTC m=+1047.095541847 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert") pod "infra-operator-controller-manager-58749ffdfb-mcrj4" (UID: "5a65a9ef-28c7-46ae-826d-5546af1103a5") : secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.605215 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.614678 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.615496 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.617975 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-mgtgs" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.620243 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.625647 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.626737 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.630163 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mh2wz" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.630477 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d67l\" (UniqueName: \"kubernetes.io/projected/841fb528-61a8-445e-a135-be26295bc975-kube-api-access-4d67l\") pod \"heat-operator-controller-manager-594c8c9d5d-xrmvt\" (UID: \"841fb528-61a8-445e-a135-be26295bc975\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.636151 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh5df\" (UniqueName: \"kubernetes.io/projected/5a65a9ef-28c7-46ae-826d-5546af1103a5-kube-api-access-bh5df\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.636653 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5klj\" (UniqueName: \"kubernetes.io/projected/d9e69fcf-58c9-45fe-a291-4628c8219e10-kube-api-access-z5klj\") pod \"horizon-operator-controller-manager-77d5c5b54f-sg9x5\" (UID: \"d9e69fcf-58c9-45fe-a291-4628c8219e10\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.641728 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.643045 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44lzg\" (UniqueName: \"kubernetes.io/projected/d98bebb2-a42a-45a6-b452-a82ce1f62896-kube-api-access-44lzg\") pod \"ironic-operator-controller-manager-598f7747c9-f7lm6\" (UID: \"d98bebb2-a42a-45a6-b452-a82ce1f62896\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.645410 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.647707 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.660325 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg8zq\" (UniqueName: \"kubernetes.io/projected/64bae0eb-d703-4058-a545-b42d62045b90-kube-api-access-cg8zq\") pod \"glance-operator-controller-manager-78fdd796fd-jq89z\" (UID: \"64bae0eb-d703-4058-a545-b42d62045b90\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.691040 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.691793 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.696173 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.697382 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dg9zv" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.700790 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fktnr\" (UniqueName: \"kubernetes.io/projected/9710b785-e422-4aca-88e8-e88d26d4e724-kube-api-access-fktnr\") pod \"neutron-operator-controller-manager-78d58447c5-sxkzh\" (UID: \"9710b785-e422-4aca-88e8-e88d26d4e724\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.700864 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th6g4\" (UniqueName: \"kubernetes.io/projected/0784c928-e0c5-4afb-99cb-4f1f96820a14-kube-api-access-th6g4\") pod \"keystone-operator-controller-manager-b8b6d4659-bgbpj\" (UID: \"0784c928-e0c5-4afb-99cb-4f1f96820a14\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.700894 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gskfg\" (UniqueName: \"kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg\") pod \"nova-operator-controller-manager-d9495b985-k98mk\" (UID: \"9bad88d6-5ca9-4176-904d-72b793e1361e\") " pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.700928 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5pl\" (UniqueName: \"kubernetes.io/projected/bb6ce8ae-8d3f-4988-9386-6a20487f8ae9-kube-api-access-qc5pl\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg\" (UID: \"bb6ce8ae-8d3f-4988-9386-6a20487f8ae9\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.700967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfgdr\" (UniqueName: \"kubernetes.io/projected/853c6152-25bf-4374-a941-f9cd4202c87f-kube-api-access-pfgdr\") pod 
\"manila-operator-controller-manager-78c6999f6f-pfdc5\" (UID: \"853c6152-25bf-4374-a941-f9cd4202c87f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.720415 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfgdr\" (UniqueName: \"kubernetes.io/projected/853c6152-25bf-4374-a941-f9cd4202c87f-kube-api-access-pfgdr\") pod \"manila-operator-controller-manager-78c6999f6f-pfdc5\" (UID: \"853c6152-25bf-4374-a941-f9cd4202c87f\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.721504 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.722631 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th6g4\" (UniqueName: \"kubernetes.io/projected/0784c928-e0c5-4afb-99cb-4f1f96820a14-kube-api-access-th6g4\") pod \"keystone-operator-controller-manager-b8b6d4659-bgbpj\" (UID: \"0784c928-e0c5-4afb-99cb-4f1f96820a14\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.733537 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.744847 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.754664 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.758547 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-ktsgf" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.762055 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.785016 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.804394 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806596 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fktnr\" (UniqueName: \"kubernetes.io/projected/9710b785-e422-4aca-88e8-e88d26d4e724-kube-api-access-fktnr\") pod \"neutron-operator-controller-manager-78d58447c5-sxkzh\" (UID: \"9710b785-e422-4aca-88e8-e88d26d4e724\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806650 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frhxs\" (UniqueName: \"kubernetes.io/projected/a07598ff-60cc-482e-a551-af751575709c-kube-api-access-frhxs\") pod \"octavia-operator-controller-manager-7bd9774b6-vl7m5\" (UID: \"a07598ff-60cc-482e-a551-af751575709c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806688 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gskfg\" (UniqueName: \"kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg\") pod \"nova-operator-controller-manager-d9495b985-k98mk\" (UID: \"9bad88d6-5ca9-4176-904d-72b793e1361e\") " pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806725 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvdr7\" (UniqueName: \"kubernetes.io/projected/44a963d8-d403-42d5-acd2-a0379f07db51-kube-api-access-dvdr7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806758 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc5pl\" (UniqueName: \"kubernetes.io/projected/bb6ce8ae-8d3f-4988-9386-6a20487f8ae9-kube-api-access-qc5pl\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg\" (UID: \"bb6ce8ae-8d3f-4988-9386-6a20487f8ae9\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.806773 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.809488 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.809521 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s"] Jan 23 14:21:39 crc 
kubenswrapper[4775]: I0123 14:21:39.809625 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.811365 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-c4pmt" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.811371 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.811558 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.812278 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.812649 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.812670 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.812681 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.813021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.813281 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.815028 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lswqj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.815503 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-bjttx" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.815714 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-rgpzh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.819043 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.829462 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5pl\" (UniqueName: \"kubernetes.io/projected/bb6ce8ae-8d3f-4988-9386-6a20487f8ae9-kube-api-access-qc5pl\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg\" (UID: \"bb6ce8ae-8d3f-4988-9386-6a20487f8ae9\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.832196 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gskfg\" (UniqueName: \"kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg\") pod \"nova-operator-controller-manager-d9495b985-k98mk\" (UID: \"9bad88d6-5ca9-4176-904d-72b793e1361e\") " pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.833266 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.840436 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fktnr\" (UniqueName: \"kubernetes.io/projected/9710b785-e422-4aca-88e8-e88d26d4e724-kube-api-access-fktnr\") pod \"neutron-operator-controller-manager-78d58447c5-sxkzh\" (UID: \"9710b785-e422-4aca-88e8-e88d26d4e724\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.846020 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.847091 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.854552 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-vfzg6" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.867564 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.883548 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.898459 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.908648 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.910319 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.913225 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7jvgx" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.913608 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914216 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj998\" (UniqueName: \"kubernetes.io/projected/9f9597bf-12a1-4204-ac57-37c4c0189687-kube-api-access-lj998\") pod \"test-operator-controller-manager-69797bbcbd-xtmz8\" (UID: \"9f9597bf-12a1-4204-ac57-37c4c0189687\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914256 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frhxs\" (UniqueName: \"kubernetes.io/projected/a07598ff-60cc-482e-a551-af751575709c-kube-api-access-frhxs\") pod \"octavia-operator-controller-manager-7bd9774b6-vl7m5\" (UID: \"a07598ff-60cc-482e-a551-af751575709c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914278 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87fmw\" (UniqueName: \"kubernetes.io/projected/91da96b4-921a-4b88-9804-55745989e08b-kube-api-access-87fmw\") pod \"telemetry-operator-controller-manager-85cd9769bb-jrhlh\" (UID: \"91da96b4-921a-4b88-9804-55745989e08b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914312 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmtrg\" (UniqueName: \"kubernetes.io/projected/3d7c7bc6-5124-4cd4-a406-448ca94ba640-kube-api-access-rmtrg\") pod \"ovn-operator-controller-manager-55db956ddc-xst4r\" (UID: \"3d7c7bc6-5124-4cd4-a406-448ca94ba640\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914337 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srgq4\" (UniqueName: \"kubernetes.io/projected/072b9a9d-8a08-454c-b1b6-628fcdcc91df-kube-api-access-srgq4\") pod \"placement-operator-controller-manager-5d646b7d76-n4k5s\" (UID: \"072b9a9d-8a08-454c-b1b6-628fcdcc91df\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:21:39 crc kubenswrapper[4775]: 
I0123 14:21:39.914359 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvdr7\" (UniqueName: \"kubernetes.io/projected/44a963d8-d403-42d5-acd2-a0379f07db51-kube-api-access-dvdr7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914380 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.914421 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96z89\" (UniqueName: \"kubernetes.io/projected/ecef6080-ea2c-43f4-8ffa-da2ceb59369d-kube-api-access-96z89\") pod \"swift-operator-controller-manager-547cbdb99f-nqw74\" (UID: \"ecef6080-ea2c-43f4-8ffa-da2ceb59369d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:39 crc kubenswrapper[4775]: E0123 14:21:39.914685 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:39 crc kubenswrapper[4775]: E0123 14:21:39.914739 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert podName:44a963d8-d403-42d5-acd2-a0379f07db51 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:40.414724441 +0000 UTC m=+1047.409553181 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" (UID: "44a963d8-d403-42d5-acd2-a0379f07db51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.940509 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvdr7\" (UniqueName: \"kubernetes.io/projected/44a963d8-d403-42d5-acd2-a0379f07db51-kube-api-access-dvdr7\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.947545 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frhxs\" (UniqueName: \"kubernetes.io/projected/a07598ff-60cc-482e-a551-af751575709c-kube-api-access-frhxs\") pod \"octavia-operator-controller-manager-7bd9774b6-vl7m5\" (UID: \"a07598ff-60cc-482e-a551-af751575709c\") " pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.994415 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9"] Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.995478 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.999196 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.999412 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-fsvsf" Jan 23 14:21:39 crc kubenswrapper[4775]: I0123 14:21:39.999523 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.017608 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.018610 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srgq4\" (UniqueName: \"kubernetes.io/projected/072b9a9d-8a08-454c-b1b6-628fcdcc91df-kube-api-access-srgq4\") pod \"placement-operator-controller-manager-5d646b7d76-n4k5s\" (UID: \"072b9a9d-8a08-454c-b1b6-628fcdcc91df\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.018724 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96qrp\" (UniqueName: \"kubernetes.io/projected/272dcd84-1bb6-42cb-8c8e-6851f9f031de-kube-api-access-96qrp\") pod \"watcher-operator-controller-manager-6d9458688d-v8dw9\" (UID: \"272dcd84-1bb6-42cb-8c8e-6851f9f031de\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.018865 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96z89\" (UniqueName: \"kubernetes.io/projected/ecef6080-ea2c-43f4-8ffa-da2ceb59369d-kube-api-access-96z89\") pod \"swift-operator-controller-manager-547cbdb99f-nqw74\" (UID: \"ecef6080-ea2c-43f4-8ffa-da2ceb59369d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.018969 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj998\" (UniqueName: \"kubernetes.io/projected/9f9597bf-12a1-4204-ac57-37c4c0189687-kube-api-access-lj998\") pod \"test-operator-controller-manager-69797bbcbd-xtmz8\" (UID: \"9f9597bf-12a1-4204-ac57-37c4c0189687\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.019046 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87fmw\" (UniqueName: \"kubernetes.io/projected/91da96b4-921a-4b88-9804-55745989e08b-kube-api-access-87fmw\") pod \"telemetry-operator-controller-manager-85cd9769bb-jrhlh\" (UID: \"91da96b4-921a-4b88-9804-55745989e08b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.019123 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmtrg\" (UniqueName: \"kubernetes.io/projected/3d7c7bc6-5124-4cd4-a406-448ca94ba640-kube-api-access-rmtrg\") pod \"ovn-operator-controller-manager-55db956ddc-xst4r\" (UID: 
\"3d7c7bc6-5124-4cd4-a406-448ca94ba640\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.036140 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.036944 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.046206 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87fmw\" (UniqueName: \"kubernetes.io/projected/91da96b4-921a-4b88-9804-55745989e08b-kube-api-access-87fmw\") pod \"telemetry-operator-controller-manager-85cd9769bb-jrhlh\" (UID: \"91da96b4-921a-4b88-9804-55745989e08b\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.087450 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96z89\" (UniqueName: \"kubernetes.io/projected/ecef6080-ea2c-43f4-8ffa-da2ceb59369d-kube-api-access-96z89\") pod \"swift-operator-controller-manager-547cbdb99f-nqw74\" (UID: \"ecef6080-ea2c-43f4-8ffa-da2ceb59369d\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.087941 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj998\" (UniqueName: \"kubernetes.io/projected/9f9597bf-12a1-4204-ac57-37c4c0189687-kube-api-access-lj998\") pod \"test-operator-controller-manager-69797bbcbd-xtmz8\" (UID: \"9f9597bf-12a1-4204-ac57-37c4c0189687\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.088363 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srgq4\" (UniqueName: \"kubernetes.io/projected/072b9a9d-8a08-454c-b1b6-628fcdcc91df-kube-api-access-srgq4\") pod \"placement-operator-controller-manager-5d646b7d76-n4k5s\" (UID: \"072b9a9d-8a08-454c-b1b6-628fcdcc91df\") " pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.103669 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.106004 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.115322 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.127274 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-56whh" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.131267 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.134386 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.134433 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x9hk\" (UniqueName: \"kubernetes.io/projected/313b5382-60cf-4627-8ba7-a091fc457989-kube-api-access-7x9hk\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.134467 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.134530 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96qrp\" (UniqueName: \"kubernetes.io/projected/272dcd84-1bb6-42cb-8c8e-6851f9f031de-kube-api-access-96qrp\") pod \"watcher-operator-controller-manager-6d9458688d-v8dw9\" (UID: \"272dcd84-1bb6-42cb-8c8e-6851f9f031de\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.134566 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.134721 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.134766 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert podName:5a65a9ef-28c7-46ae-826d-5546af1103a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:41.134749363 +0000 UTC m=+1048.129578093 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert") pod "infra-operator-controller-manager-58749ffdfb-mcrj4" (UID: "5a65a9ef-28c7-46ae-826d-5546af1103a5") : secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.138082 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.157006 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.193348 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmtrg\" (UniqueName: \"kubernetes.io/projected/3d7c7bc6-5124-4cd4-a406-448ca94ba640-kube-api-access-rmtrg\") pod \"ovn-operator-controller-manager-55db956ddc-xst4r\" (UID: \"3d7c7bc6-5124-4cd4-a406-448ca94ba640\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.193876 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96qrp\" (UniqueName: \"kubernetes.io/projected/272dcd84-1bb6-42cb-8c8e-6851f9f031de-kube-api-access-96qrp\") pod \"watcher-operator-controller-manager-6d9458688d-v8dw9\" (UID: \"272dcd84-1bb6-42cb-8c8e-6851f9f031de\") " pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.203593 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.234561 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.235166 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.235495 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66h6p\" (UniqueName: \"kubernetes.io/projected/f9da51f1-a035-44b8-9391-0d6018a84c61-kube-api-access-66h6p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2lhsf\" (UID: \"f9da51f1-a035-44b8-9391-0d6018a84c61\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.235573 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.235632 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.235659 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x9hk\" (UniqueName: \"kubernetes.io/projected/313b5382-60cf-4627-8ba7-a091fc457989-kube-api-access-7x9hk\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.235914 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.236251 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:40.736114258 +0000 UTC m=+1047.730942998 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.236402 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.236490 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:40.736481779 +0000 UTC m=+1047.731310519 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.260226 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x9hk\" (UniqueName: \"kubernetes.io/projected/313b5382-60cf-4627-8ba7-a091fc457989-kube-api-access-7x9hk\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.284592 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:40 crc kubenswrapper[4775]: W0123 14:21:40.336185 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56ee00d0_c0f0_442a_bf4a_7335b62c1c4e.slice/crio-afc85f9ff0a5991ece4bbf47c4ef497926f8ea2d8e48f8f279a94d996b32ac39 WatchSource:0}: Error finding container afc85f9ff0a5991ece4bbf47c4ef497926f8ea2d8e48f8f279a94d996b32ac39: Status 404 returned error can't find the container with id afc85f9ff0a5991ece4bbf47c4ef497926f8ea2d8e48f8f279a94d996b32ac39 Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.336440 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66h6p\" (UniqueName: \"kubernetes.io/projected/f9da51f1-a035-44b8-9391-0d6018a84c61-kube-api-access-66h6p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2lhsf\" (UID: \"f9da51f1-a035-44b8-9391-0d6018a84c61\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.358534 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66h6p\" (UniqueName: \"kubernetes.io/projected/f9da51f1-a035-44b8-9391-0d6018a84c61-kube-api-access-66h6p\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2lhsf\" (UID: \"f9da51f1-a035-44b8-9391-0d6018a84c61\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.431648 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.437767 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.437979 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.438025 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert podName:44a963d8-d403-42d5-acd2-a0379f07db51 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:41.438010445 +0000 UTC m=+1048.432839185 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" (UID: "44a963d8-d403-42d5-acd2-a0379f07db51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.451598 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.475066 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.484164 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.551196 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.665218 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.679903 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.682934 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.750786 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.750884 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.750987 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.751027 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.751039 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:41.75102321 +0000 UTC m=+1048.745851950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: E0123 14:21:40.751073 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:41.751058631 +0000 UTC m=+1048.745887361 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.869981 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg"] Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.877323 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj"] Jan 23 14:21:40 crc kubenswrapper[4775]: W0123 14:21:40.879155 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb6ce8ae_8d3f_4988_9386_6a20487f8ae9.slice/crio-4bde7297fedac1ac3c4c59edc7c48cbc5ede515071e0bdafb5070d227e39937e WatchSource:0}: Error finding container 4bde7297fedac1ac3c4c59edc7c48cbc5ede515071e0bdafb5070d227e39937e: Status 404 returned error can't find the container with id 4bde7297fedac1ac3c4c59edc7c48cbc5ede515071e0bdafb5070d227e39937e Jan 23 14:21:40 crc kubenswrapper[4775]: I0123 14:21:40.887984 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.066568 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.089454 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.098734 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:21:41 crc kubenswrapper[4775]: W0123 14:21:41.098937 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9710b785_e422_4aca_88e8_e88d26d4e724.slice/crio-999d037cdd9246310854c4612f0c862016dfb68a7ac81ae3beafc0e260f540b2 WatchSource:0}: Error finding container 999d037cdd9246310854c4612f0c862016dfb68a7ac81ae3beafc0e260f540b2: Status 404 returned error can't find the container with id 999d037cdd9246310854c4612f0c862016dfb68a7ac81ae3beafc0e260f540b2 Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.103940 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.156917 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.157163 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.157259 4775 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert podName:5a65a9ef-28c7-46ae-826d-5546af1103a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:43.157234323 +0000 UTC m=+1050.152063063 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert") pod "infra-operator-controller-manager-58749ffdfb-mcrj4" (UID: "5a65a9ef-28c7-46ae-826d-5546af1103a5") : secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.237727 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.240228 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9"] Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.265669 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96qrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-6d9458688d-v8dw9_openstack-operators(272dcd84-1bb6-42cb-8c8e-6851f9f031de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.267108 4775 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" podUID="272dcd84-1bb6-42cb-8c8e-6851f9f031de" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.269108 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.274404 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" event={"ID":"841fb528-61a8-445e-a135-be26295bc975","Type":"ContainerStarted","Data":"db229de7b8a6238a98edaf9bf26b33f4e350ab7fb3bb77c280ee1da12f4c6a0c"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.276899 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.276923 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" event={"ID":"352223d5-fa0a-43df-8bad-0eaa9b6b439d","Type":"ContainerStarted","Data":"e7a2b0e0a45bac63400217ceb07fdc94a77e510171269f4ff2a05df993bbb5ac"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.280158 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" event={"ID":"ecef6080-ea2c-43f4-8ffa-da2ceb59369d","Type":"ContainerStarted","Data":"129ee5e5da3d8ef7d68f73ab6068a7925151a9cb63fa839879397f449acc7e9b"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.281123 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5"] Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.281537 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" event={"ID":"56ee00d0-c0f0-442a-bf4a-7335b62c1c4e","Type":"ContainerStarted","Data":"afc85f9ff0a5991ece4bbf47c4ef497926f8ea2d8e48f8f279a94d996b32ac39"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.284037 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" event={"ID":"0784c928-e0c5-4afb-99cb-4f1f96820a14","Type":"ContainerStarted","Data":"958fec355ec2e927879ead1cf096153deae15a96fe02055adf6702c4956f8c4c"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.284820 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8"] Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.286248 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5d646b7d76-n4k5s_openstack-operators(072b9a9d-8a08-454c-b1b6-628fcdcc91df): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.286877 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" event={"ID":"853c6152-25bf-4374-a941-f9cd4202c87f","Type":"ContainerStarted","Data":"fef82658e900a2f6a85823adcd2016a932b07053cbc2df5d3a903112c2e396ad"} Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.287037 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmtrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-xst4r_openstack-operators(3d7c7bc6-5124-4cd4-a406-448ca94ba640): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.287452 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" podUID="072b9a9d-8a08-454c-b1b6-628fcdcc91df" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.287702 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" event={"ID":"64bae0eb-d703-4058-a545-b42d62045b90","Type":"ContainerStarted","Data":"a46114bb2373d705985767a513ff41fdc1f93b36ba78974b9d5075f550ed10e2"} Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.288138 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" podUID="3d7c7bc6-5124-4cd4-a406-448ca94ba640" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.288675 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" event={"ID":"9ce79c2a-2c52-48de-80a6-887d592578d3","Type":"ContainerStarted","Data":"cdee4c6c9c1415ea8b74f300719a5a9b250b0b993d77e077cc6df21c62092136"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.289554 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" event={"ID":"d9e69fcf-58c9-45fe-a291-4628c8219e10","Type":"ContainerStarted","Data":"0f672fce2a89dedd21cdfb294c8a56e8ee9bf30c43ab00ad718c95b3f67c6829"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.290263 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" 
event={"ID":"9710b785-e422-4aca-88e8-e88d26d4e724","Type":"ContainerStarted","Data":"999d037cdd9246310854c4612f0c862016dfb68a7ac81ae3beafc0e260f540b2"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.291101 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" event={"ID":"bb6ce8ae-8d3f-4988-9386-6a20487f8ae9","Type":"ContainerStarted","Data":"4bde7297fedac1ac3c4c59edc7c48cbc5ede515071e0bdafb5070d227e39937e"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.291792 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" event={"ID":"9bad88d6-5ca9-4176-904d-72b793e1361e","Type":"ContainerStarted","Data":"3b31a7012ea48421023dcf9b284625ce3e8507aa2773ce103b29a5ca80ded146"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.293801 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" event={"ID":"91da96b4-921a-4b88-9804-55745989e08b","Type":"ContainerStarted","Data":"b768501f2b234575b918947ca54de948aadb0d4b42b85fe4e56f4c86accc286b"} Jan 23 14:21:41 crc kubenswrapper[4775]: W0123 14:21:41.294966 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9597bf_12a1_4204_ac57_37c4c0189687.slice/crio-cdaa6e19cea03536eb78c59984213ddde02fb48f352b557bbdf3ac5f35173545 WatchSource:0}: Error finding container cdaa6e19cea03536eb78c59984213ddde02fb48f352b557bbdf3ac5f35173545: Status 404 returned error can't find the container with id cdaa6e19cea03536eb78c59984213ddde02fb48f352b557bbdf3ac5f35173545 Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.295157 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" event={"ID":"d98bebb2-a42a-45a6-b452-a82ce1f62896","Type":"ContainerStarted","Data":"b51d02d91198b6b29ea8129d6fa27afd58f540ccf1eb03aff6c21935fd28f0ce"} Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.296707 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" event={"ID":"272dcd84-1bb6-42cb-8c8e-6851f9f031de","Type":"ContainerStarted","Data":"4c6e90c804a052d4f5b2b0202990e850b6a6f56a6d4f3819524b4b0d210b287c"} Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.297834 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lj998,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-xtmz8_openstack-operators(9f9597bf-12a1-4204-ac57-37c4c0189687): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.299166 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" podUID="272dcd84-1bb6-42cb-8c8e-6851f9f031de" Jan 23 14:21:41 crc kubenswrapper[4775]: W0123 14:21:41.299176 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda07598ff_60cc_482e_a551_af751575709c.slice/crio-5d9da5130a65ba46334b94e3e5cbb27f78fe17253f94ce08bc3da0be8ebcea41 WatchSource:0}: Error finding container 5d9da5130a65ba46334b94e3e5cbb27f78fe17253f94ce08bc3da0be8ebcea41: Status 404 returned error can't find the container with id 5d9da5130a65ba46334b94e3e5cbb27f78fe17253f94ce08bc3da0be8ebcea41 Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.299229 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" podUID="9f9597bf-12a1-4204-ac57-37c4c0189687" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.302522 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-frhxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7bd9774b6-vl7m5_openstack-operators(a07598ff-60cc-482e-a551-af751575709c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.303847 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" podUID="a07598ff-60cc-482e-a551-af751575709c" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.374117 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf"] Jan 23 14:21:41 crc kubenswrapper[4775]: W0123 14:21:41.379090 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9da51f1_a035_44b8_9391_0d6018a84c61.slice/crio-e7bdedf6a779157f8c7e65eb8d9825ebae7b5753439e0a89e1b7badf688c1052 WatchSource:0}: Error finding container e7bdedf6a779157f8c7e65eb8d9825ebae7b5753439e0a89e1b7badf688c1052: Status 404 returned error can't find the container with id e7bdedf6a779157f8c7e65eb8d9825ebae7b5753439e0a89e1b7badf688c1052 Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.463826 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.464018 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.464070 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert podName:44a963d8-d403-42d5-acd2-a0379f07db51 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:43.464055667 +0000 UTC m=+1050.458884407 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" (UID: "44a963d8-d403-42d5-acd2-a0379f07db51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.770347 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:41 crc kubenswrapper[4775]: I0123 14:21:41.770539 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.770719 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.770776 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:43.770760669 +0000 UTC m=+1050.765589409 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.771091 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:41 crc kubenswrapper[4775]: E0123 14:21:41.771123 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:43.77111545 +0000 UTC m=+1050.765944190 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found Jan 23 14:21:42 crc kubenswrapper[4775]: I0123 14:21:42.307819 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" event={"ID":"3d7c7bc6-5124-4cd4-a406-448ca94ba640","Type":"ContainerStarted","Data":"bd9242543a7f5b2dc5838441799986fb353563af73ab096522e1bbd88214b2f2"} Jan 23 14:21:42 crc kubenswrapper[4775]: E0123 14:21:42.309240 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" podUID="3d7c7bc6-5124-4cd4-a406-448ca94ba640" Jan 23 14:21:42 crc kubenswrapper[4775]: I0123 14:21:42.320188 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" event={"ID":"a07598ff-60cc-482e-a551-af751575709c","Type":"ContainerStarted","Data":"5d9da5130a65ba46334b94e3e5cbb27f78fe17253f94ce08bc3da0be8ebcea41"} Jan 23 14:21:42 crc kubenswrapper[4775]: I0123 14:21:42.330276 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" event={"ID":"f9da51f1-a035-44b8-9391-0d6018a84c61","Type":"ContainerStarted","Data":"e7bdedf6a779157f8c7e65eb8d9825ebae7b5753439e0a89e1b7badf688c1052"} Jan 23 14:21:42 crc kubenswrapper[4775]: I0123 14:21:42.343787 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" event={"ID":"9f9597bf-12a1-4204-ac57-37c4c0189687","Type":"ContainerStarted","Data":"cdaa6e19cea03536eb78c59984213ddde02fb48f352b557bbdf3ac5f35173545"} Jan 23 14:21:42 crc kubenswrapper[4775]: I0123 14:21:42.346682 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" event={"ID":"072b9a9d-8a08-454c-b1b6-628fcdcc91df","Type":"ContainerStarted","Data":"5ebd6b37eb25b61b551dc8eb0bd3c831f4061873c635f732fe2b1f83b21bbc42"} Jan 23 14:21:42 crc kubenswrapper[4775]: E0123 14:21:42.351008 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" podUID="a07598ff-60cc-482e-a551-af751575709c" Jan 23 14:21:42 crc kubenswrapper[4775]: E0123 14:21:42.351197 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:f2035a0d3a8cc9434ab118078297f08cb8f3df98d1c75005279ee7915a3c2551\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" podUID="272dcd84-1bb6-42cb-8c8e-6851f9f031de" Jan 23 14:21:42 crc kubenswrapper[4775]: E0123 14:21:42.351271 4775 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" podUID="9f9597bf-12a1-4204-ac57-37c4c0189687" Jan 23 14:21:42 crc kubenswrapper[4775]: E0123 14:21:42.351306 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" podUID="072b9a9d-8a08-454c-b1b6-628fcdcc91df" Jan 23 14:21:43 crc kubenswrapper[4775]: I0123 14:21:43.191544 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.191729 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.191817 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert podName:5a65a9ef-28c7-46ae-826d-5546af1103a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:47.191787041 +0000 UTC m=+1054.186615771 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert") pod "infra-operator-controller-manager-58749ffdfb-mcrj4" (UID: "5a65a9ef-28c7-46ae-826d-5546af1103a5") : secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.365845 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:a8fc8f9d445b1232f446119015b226008b07c6a259f5bebc1fcbb39ec310afe5\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" podUID="a07598ff-60cc-482e-a551-af751575709c" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.365951 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" podUID="9f9597bf-12a1-4204-ac57-37c4c0189687" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.366028 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:65cfe5b9d5b0571aaf8ff9840b12cc56e90ca4cef162dd260c3a9fa2b52c6dd0\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" podUID="072b9a9d-8a08-454c-b1b6-628fcdcc91df" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.366108 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" podUID="3d7c7bc6-5124-4cd4-a406-448ca94ba640" Jan 23 14:21:43 crc kubenswrapper[4775]: I0123 14:21:43.495557 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.495703 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.495750 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert podName:44a963d8-d403-42d5-acd2-a0379f07db51 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:47.495735733 +0000 UTC m=+1054.490564473 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" (UID: "44a963d8-d403-42d5-acd2-a0379f07db51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: I0123 14:21:43.802478 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:43 crc kubenswrapper[4775]: I0123 14:21:43.802634 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.802662 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.802745 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:47.802727714 +0000 UTC m=+1054.797556454 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.803355 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:43 crc kubenswrapper[4775]: E0123 14:21:43.803433 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:47.803417224 +0000 UTC m=+1054.798245964 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: I0123 14:21:47.254720 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.254903 4775 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.255652 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert podName:5a65a9ef-28c7-46ae-826d-5546af1103a5 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:55.255629167 +0000 UTC m=+1062.250457917 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert") pod "infra-operator-controller-manager-58749ffdfb-mcrj4" (UID: "5a65a9ef-28c7-46ae-826d-5546af1103a5") : secret "infra-operator-webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: I0123 14:21:47.568720 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.569165 4775 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.569211 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert podName:44a963d8-d403-42d5-acd2-a0379f07db51 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:55.569198527 +0000 UTC m=+1062.564027267 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" (UID: "44a963d8-d403-42d5-acd2-a0379f07db51") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: I0123 14:21:47.876199 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:47 crc kubenswrapper[4775]: I0123 14:21:47.876301 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.876427 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.876441 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.876478 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:55.876465075 +0000 UTC m=+1062.871293815 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:47 crc kubenswrapper[4775]: E0123 14:21:47.876564 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:21:55.876543778 +0000 UTC m=+1062.871372518 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found
Jan 23 14:21:54 crc kubenswrapper[4775]: E0123 14:21:54.492444 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127"
Jan 23 14:21:54 crc kubenswrapper[4775]: E0123 14:21:54.492998 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-87fmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-jrhlh_openstack-operators(91da96b4-921a-4b88-9804-55745989e08b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 23 14:21:54 crc kubenswrapper[4775]: E0123 14:21:54.494853 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" podUID="91da96b4-921a-4b88-9804-55745989e08b"
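Note: the kuberuntime_manager.go:1274 "Unhandled Error" blob above is the kubelet printing the pod's v1.Container spec verbatim when StartContainer fails. Below is the same spec rewritten as the Go value it corresponds to, using k8s.io/api/core/v1 types, with field values transcribed from the log entry (536870912 bytes = 512Mi, 268435456 bytes = 256Mi). This is an illustrative reconstruction, not the operator's actual manifest source:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// managerContainer reconstructs the telemetry-operator "manager" container
// from the &Container{...} dump in the log above.
func managerContainer() corev1.Container {
	return corev1.Container{
		Name:    "manager",
		Image:   "quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127",
		Command: []string{"/manager"},
		Args: []string{
			"--leader-elect",
			"--health-probe-bind-address=:8081",
			"--metrics-bind-address=127.0.0.1:8080",
		},
		Env: []corev1.EnvVar{
			{Name: "LEASE_DURATION", Value: "30"},
			{Name: "RENEW_DEADLINE", Value: "20"},
			{Name: "RETRY_PERIOD", Value: "5"},
			{Name: "ENABLE_WEBHOOKS", Value: "false"},
			{Name: "METRICS_CERTS", Value: "false"},
		},
		Resources: corev1.ResourceRequirements{
			Limits: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("500m"),
				corev1.ResourceMemory: resource.MustParse("512Mi"),
			},
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("10m"),
				corev1.ResourceMemory: resource.MustParse("256Mi"),
			},
		},
		// Both probes hit the manager's health endpoint on :8081.
		LivenessProbe: &corev1.Probe{
			ProbeHandler:        corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8081)}},
			InitialDelaySeconds: 15,
			PeriodSeconds:       20,
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler:        corev1.ProbeHandler{HTTPGet: &corev1.HTTPGetAction{Path: "/readyz", Port: intstr.FromInt(8081)}},
			InitialDelaySeconds: 5,
			PeriodSeconds:       10,
		},
	}
}

func main() {
	c := managerContainer()
	fmt.Printf("%s -> %s\n", c.Name, c.Image)
}
```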
pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" podUID="91da96b4-921a-4b88-9804-55745989e08b" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.286742 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.295045 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a65a9ef-28c7-46ae-826d-5546af1103a5-cert\") pod \"infra-operator-controller-manager-58749ffdfb-mcrj4\" (UID: \"5a65a9ef-28c7-46ae-826d-5546af1103a5\") " pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.402081 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.454419 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" podUID="91da96b4-921a-4b88-9804-55745989e08b" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.590467 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.595573 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/44a963d8-d403-42d5-acd2-a0379f07db51-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854zk48c\" (UID: \"44a963d8-d403-42d5-acd2-a0379f07db51\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.622484 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/openstack-k8s-operators/nova-operator:232d61b7408febabff72594b5471873243247e20" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.622534 4775 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.132:5001/openstack-k8s-operators/nova-operator:232d61b7408febabff72594b5471873243247e20" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.624367 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.132:5001/openstack-k8s-operators/nova-operator:232d61b7408febabff72594b5471873243247e20,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gskfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-d9495b985-k98mk_openstack-operators(9bad88d6-5ca9-4176-904d-72b793e1361e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.625669 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.778640 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.896154 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:55 crc kubenswrapper[4775]: I0123 14:21:55.896284 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.896419 4775 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.896515 4775 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.896539 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:22:11.896511528 +0000 UTC m=+1078.891340298 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "metrics-server-cert" not found Jan 23 14:21:55 crc kubenswrapper[4775]: E0123 14:21:55.896673 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs podName:313b5382-60cf-4627-8ba7-a091fc457989 nodeName:}" failed. No retries permitted until 2026-01-23 14:22:11.896629071 +0000 UTC m=+1078.891457831 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs") pod "openstack-operator-controller-manager-bb8f85db-bkqk9" (UID: "313b5382-60cf-4627-8ba7-a091fc457989") : secret "webhook-server-cert" not found Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.216350 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.216552 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-th6g4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-bgbpj_openstack-operators(0784c928-e0c5-4afb-99cb-4f1f96820a14): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.217759 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" 
podUID="0784c928-e0c5-4afb-99cb-4f1f96820a14" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.461199 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" podUID="0784c928-e0c5-4afb-99cb-4f1f96820a14" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.462858 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/openstack-k8s-operators/nova-operator:232d61b7408febabff72594b5471873243247e20\\\"\"" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.918576 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 23 14:21:56 crc kubenswrapper[4775]: E0123 14:21:56.919410 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-66h6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2lhsf_openstack-operators(f9da51f1-a035-44b8-9391-0d6018a84c61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:21:56 crc 
kubenswrapper[4775]: E0123 14:21:56.920590 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" podUID="f9da51f1-a035-44b8-9391-0d6018a84c61"
Jan 23 14:21:57 crc kubenswrapper[4775]: E0123 14:21:57.465516 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" podUID="f9da51f1-a035-44b8-9391-0d6018a84c61"
Jan 23 14:21:58 crc kubenswrapper[4775]: I0123 14:21:58.023833 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c"]
Jan 23 14:21:58 crc kubenswrapper[4775]: I0123 14:21:58.135467 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4"]
Jan 23 14:21:58 crc kubenswrapper[4775]: I0123 14:21:58.489249 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" event={"ID":"9ce79c2a-2c52-48de-80a6-887d592578d3","Type":"ContainerStarted","Data":"db2f2688bae5a6164e4165906cb369018cb9bf1f1fec4f02a36b65d09715a616"}
Jan 23 14:21:58 crc kubenswrapper[4775]: I0123 14:21:58.489422 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft"
Jan 23 14:21:58 crc kubenswrapper[4775]: I0123 14:21:58.506452 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" podStartSLOduration=2.270703579 podStartE2EDuration="19.506440269s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.521637747 +0000 UTC m=+1047.516466487" lastFinishedPulling="2026-01-23 14:21:57.757374437 +0000 UTC m=+1064.752203177" observedRunningTime="2026-01-23 14:21:58.501155056 +0000 UTC m=+1065.495983796" watchObservedRunningTime="2026-01-23 14:21:58.506440269 +0000 UTC m=+1065.501269009"
Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.502044 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" event={"ID":"56ee00d0-c0f0-442a-bf4a-7335b62c1c4e","Type":"ContainerStarted","Data":"9330b422d03f293059c950c7db15fdd3f7d2cc166a990c773a43896fc66171fa"}
Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.502314 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd"
Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.504343 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" event={"ID":"9710b785-e422-4aca-88e8-e88d26d4e724","Type":"ContainerStarted","Data":"535ef816c904367f450fc1654c575b1abb5d35264b246bfd070d4c2f8d7d5844"}
Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.504449 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh"
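Note: pod_startup_latency_tracker.go:104 reports two numbers per pod: podStartE2EDuration (observedRunningTime minus podCreationTimestamp) and podStartSLOduration, which additionally excludes the time spent pulling images (lastFinishedPulling minus firstStartedPulling). The cinder-operator entry above checks out: 19.506440269s end-to-end minus 17.235736690s of image pulling leaves the reported 2.270703579s. A small Go sketch of that arithmetic, with the timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps from the cinder-operator-controller-manager entry above.
	firstStartedPulling := mustParse("2026-01-23T14:21:40.521637747Z")
	lastFinishedPulling := mustParse("2026-01-23T14:21:57.757374437Z")
	e2e := 19506440269 * time.Nanosecond // podStartE2EDuration="19.506440269s"

	pulling := lastFinishedPulling.Sub(firstStartedPulling)
	slo := e2e - pulling
	fmt.Printf("image pulling:       %v\n", pulling) // 17.23573669s
	fmt.Printf("podStartSLOduration: %v\n", slo)     // 2.270703579s, as logged
}
```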
pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.506516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" event={"ID":"bb6ce8ae-8d3f-4988-9386-6a20487f8ae9","Type":"ContainerStarted","Data":"88519d0f3908ad731c3a7968295ae958359de101c7157acb43cc04342089d0a2"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.506610 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.509335 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" event={"ID":"352223d5-fa0a-43df-8bad-0eaa9b6b439d","Type":"ContainerStarted","Data":"fdf6f94fd5f0c6b8722cedfc75f36c5410139682daa13027b10a211eaa2745a9"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.509485 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.524561 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" event={"ID":"d98bebb2-a42a-45a6-b452-a82ce1f62896","Type":"ContainerStarted","Data":"14ea6b409e153fe002358dd5bfe9cdc2dc004f11abb83eda5d7d78dcae47afb4"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.524633 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.527937 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" podStartSLOduration=4.685761987 podStartE2EDuration="20.52792147s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.341438348 +0000 UTC m=+1047.336267088" lastFinishedPulling="2026-01-23 14:21:56.183597821 +0000 UTC m=+1063.178426571" observedRunningTime="2026-01-23 14:21:59.521060681 +0000 UTC m=+1066.515889421" watchObservedRunningTime="2026-01-23 14:21:59.52792147 +0000 UTC m=+1066.522750210" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.533734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" event={"ID":"64bae0eb-d703-4058-a545-b42d62045b90","Type":"ContainerStarted","Data":"294ee6684d9c624a8b330cef15053321999f775702b0c0f9705aa37e9b0baf09"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.534013 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.547784 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" event={"ID":"ecef6080-ea2c-43f4-8ffa-da2ceb59369d","Type":"ContainerStarted","Data":"7227dac76c0563e3c738a508fd7da403d5acc1558b68845afcb501cb6b3d3ef6"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.548600 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:21:59 crc 
kubenswrapper[4775]: I0123 14:21:59.548877 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" podStartSLOduration=3.89476698 podStartE2EDuration="20.548866596s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.104244529 +0000 UTC m=+1048.099073269" lastFinishedPulling="2026-01-23 14:21:57.758344145 +0000 UTC m=+1064.753172885" observedRunningTime="2026-01-23 14:21:59.546674163 +0000 UTC m=+1066.541502903" watchObservedRunningTime="2026-01-23 14:21:59.548866596 +0000 UTC m=+1066.543695336" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.558954 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" event={"ID":"d9e69fcf-58c9-45fe-a291-4628c8219e10","Type":"ContainerStarted","Data":"9e71fa10f945cc09c631809e9d89bebe28368cdb354be49f777531c649fc5ae0"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.559205 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.565187 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" event={"ID":"5a65a9ef-28c7-46ae-826d-5546af1103a5","Type":"ContainerStarted","Data":"8ba5118e73f150a190f2c88b50539b48e57c3027e70e8fcb074ab8b45a5f964c"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.583438 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" event={"ID":"272dcd84-1bb6-42cb-8c8e-6851f9f031de","Type":"ContainerStarted","Data":"3f320b2468e700d95def8fd2495ca104ef38317f762aa0858e061410de74c51c"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.584232 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.585368 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" event={"ID":"841fb528-61a8-445e-a135-be26295bc975","Type":"ContainerStarted","Data":"99b663305e7965ad563cb9d0cdc6187333cf27cd90c3f22189c586efc3c3b6ba"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.585537 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.586441 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" event={"ID":"853c6152-25bf-4374-a941-f9cd4202c87f","Type":"ContainerStarted","Data":"9df83a8634e90cbf428f3981b67ea7ef5b1edc562176e8a76bf691023f48a202"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.586867 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.594292 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" 
event={"ID":"44a963d8-d403-42d5-acd2-a0379f07db51","Type":"ContainerStarted","Data":"b322d6a71f52beec10b6d0e0dd450c9ec4edd88a9e4b64ae89a7ca4b30b46405"} Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.631554 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" podStartSLOduration=5.046816142 podStartE2EDuration="20.63153831s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.59877169 +0000 UTC m=+1047.593600430" lastFinishedPulling="2026-01-23 14:21:56.183493848 +0000 UTC m=+1063.178322598" observedRunningTime="2026-01-23 14:21:59.604154347 +0000 UTC m=+1066.598983097" watchObservedRunningTime="2026-01-23 14:21:59.63153831 +0000 UTC m=+1066.626367050" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.656681 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" podStartSLOduration=5.555633577 podStartE2EDuration="20.656662388s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.083380764 +0000 UTC m=+1048.078209504" lastFinishedPulling="2026-01-23 14:21:56.184409575 +0000 UTC m=+1063.179238315" observedRunningTime="2026-01-23 14:21:59.656186454 +0000 UTC m=+1066.651015194" watchObservedRunningTime="2026-01-23 14:21:59.656662388 +0000 UTC m=+1066.651491128" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.658322 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" podStartSLOduration=5.360930919 podStartE2EDuration="20.658315186s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.887024248 +0000 UTC m=+1047.881852988" lastFinishedPulling="2026-01-23 14:21:56.184408525 +0000 UTC m=+1063.179237255" observedRunningTime="2026-01-23 14:21:59.631926782 +0000 UTC m=+1066.626755522" watchObservedRunningTime="2026-01-23 14:21:59.658315186 +0000 UTC m=+1066.653143926" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.712374 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" podStartSLOduration=3.702679398 podStartE2EDuration="20.712354671s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.747442086 +0000 UTC m=+1047.742270816" lastFinishedPulling="2026-01-23 14:21:57.757117349 +0000 UTC m=+1064.751946089" observedRunningTime="2026-01-23 14:21:59.706923884 +0000 UTC m=+1066.701752624" watchObservedRunningTime="2026-01-23 14:21:59.712354671 +0000 UTC m=+1066.707183411" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.772224 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" podStartSLOduration=3.114418791 podStartE2EDuration="20.772205034s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.26555919 +0000 UTC m=+1048.260387930" lastFinishedPulling="2026-01-23 14:21:58.923345433 +0000 UTC m=+1065.918174173" observedRunningTime="2026-01-23 14:21:59.747100697 +0000 UTC m=+1066.741929437" watchObservedRunningTime="2026-01-23 14:21:59.772205034 +0000 UTC m=+1066.767033774" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.819090 4775 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" podStartSLOduration=3.949819813 podStartE2EDuration="20.819071071s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.886903164 +0000 UTC m=+1047.881731904" lastFinishedPulling="2026-01-23 14:21:57.756154422 +0000 UTC m=+1064.750983162" observedRunningTime="2026-01-23 14:21:59.776784847 +0000 UTC m=+1066.771613587" watchObservedRunningTime="2026-01-23 14:21:59.819071071 +0000 UTC m=+1066.813899811" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.833402 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" podStartSLOduration=5.403860712 podStartE2EDuration="20.826390573s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.761855313 +0000 UTC m=+1047.756684053" lastFinishedPulling="2026-01-23 14:21:56.184385164 +0000 UTC m=+1063.179213914" observedRunningTime="2026-01-23 14:21:59.809642998 +0000 UTC m=+1066.804471728" watchObservedRunningTime="2026-01-23 14:21:59.826390573 +0000 UTC m=+1066.821219313" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.834300 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" podStartSLOduration=4.170147865 podStartE2EDuration="20.834280022s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.093006683 +0000 UTC m=+1048.087835423" lastFinishedPulling="2026-01-23 14:21:57.75713884 +0000 UTC m=+1064.751967580" observedRunningTime="2026-01-23 14:21:59.834149608 +0000 UTC m=+1066.828978348" watchObservedRunningTime="2026-01-23 14:21:59.834280022 +0000 UTC m=+1066.829108762" Jan 23 14:21:59 crc kubenswrapper[4775]: I0123 14:21:59.848467 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" podStartSLOduration=3.855596186 podStartE2EDuration="20.848447762s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.763393028 +0000 UTC m=+1047.758221768" lastFinishedPulling="2026-01-23 14:21:57.756244614 +0000 UTC m=+1064.751073344" observedRunningTime="2026-01-23 14:21:59.847225277 +0000 UTC m=+1066.842054017" watchObservedRunningTime="2026-01-23 14:21:59.848447762 +0000 UTC m=+1066.843276502" Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.677779 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" event={"ID":"44a963d8-d403-42d5-acd2-a0379f07db51","Type":"ContainerStarted","Data":"b5607a5d959ea8494045145fc77e597274b6e96cb462f54a12fe4dd3a0037431"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.679394 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" event={"ID":"9f9597bf-12a1-4204-ac57-37c4c0189687","Type":"ContainerStarted","Data":"92884eb603386718641648b0b3ba55f6d7ee1b007c867c2164ee00b7546eac3c"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.685980 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" 
event={"ID":"5a65a9ef-28c7-46ae-826d-5546af1103a5","Type":"ContainerStarted","Data":"fb90c1bd8d9c13fccd3766fa6a713944de3a3ddd42a5f2ac7a5c65417ff3b289"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.686379 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.687322 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" event={"ID":"072b9a9d-8a08-454c-b1b6-628fcdcc91df","Type":"ContainerStarted","Data":"69ef93c55e932a771a522b113fb28f7cb0884c4fce8910cb2ba02a7d540105f6"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.688695 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" event={"ID":"3d7c7bc6-5124-4cd4-a406-448ca94ba640","Type":"ContainerStarted","Data":"e3e8dae1e41645b52484e850549abfc87ead0f1b0fc18a6afe5f7d8a5b2b7e42"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.689631 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" event={"ID":"a07598ff-60cc-482e-a551-af751575709c","Type":"ContainerStarted","Data":"2dfaa1e6313d9ac18f1ecfe6f88daaadbf7fb4098d5b9343a9adb524e6f3eb0b"} Jan 23 14:22:08 crc kubenswrapper[4775]: I0123 14:22:08.710202 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4" podStartSLOduration=21.430681764 podStartE2EDuration="29.71017683s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:58.902586861 +0000 UTC m=+1065.897415601" lastFinishedPulling="2026-01-23 14:22:07.182081877 +0000 UTC m=+1074.176910667" observedRunningTime="2026-01-23 14:22:08.706502383 +0000 UTC m=+1075.701331123" watchObservedRunningTime="2026-01-23 14:22:08.71017683 +0000 UTC m=+1075.705005600" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.611573 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-pk9jd" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.629864 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-69cf5d4557-dz7ft" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.650732 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-ppxmc" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.733318 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-xrmvt" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.764887 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-jq89z" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.801655 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sg9x5" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.835538 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-f7lm6" Jan 23 14:22:09 crc kubenswrapper[4775]: I0123 14:22:09.901581 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-pfdc5" Jan 23 14:22:10 crc kubenswrapper[4775]: I0123 14:22:10.021333 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg" Jan 23 14:22:10 crc kubenswrapper[4775]: I0123 14:22:10.041668 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-sxkzh" Jan 23 14:22:10 crc kubenswrapper[4775]: I0123 14:22:10.288465 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-nqw74" Jan 23 14:22:10 crc kubenswrapper[4775]: I0123 14:22:10.436946 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6d9458688d-v8dw9" Jan 23 14:22:11 crc kubenswrapper[4775]: I0123 14:22:11.989676 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:11 crc kubenswrapper[4775]: I0123 14:22:11.989863 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:11 crc kubenswrapper[4775]: I0123 14:22:11.999112 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-webhook-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:11 crc kubenswrapper[4775]: I0123 14:22:11.999346 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/313b5382-60cf-4627-8ba7-a091fc457989-metrics-certs\") pod \"openstack-operator-controller-manager-bb8f85db-bkqk9\" (UID: \"313b5382-60cf-4627-8ba7-a091fc457989\") " pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:12 crc kubenswrapper[4775]: I0123 14:22:12.034954 4775 util.go:30] "No sandbox for pod can be found. 
Jan 23 14:22:12 crc kubenswrapper[4775]: I0123 14:22:12.562509 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9"]
Jan 23 14:22:12 crc kubenswrapper[4775]: I0123 14:22:12.727119 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" event={"ID":"313b5382-60cf-4627-8ba7-a091fc457989","Type":"ContainerStarted","Data":"d3b66bc76162b055f5be811ae8e63916a7a92a5f79e785b80deebab7d94ea605"}
Jan 23 14:22:15 crc kubenswrapper[4775]: I0123 14:22:15.415468 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-58749ffdfb-mcrj4"
Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.777607 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r"
Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.778271 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s"
Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.780221 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s"
Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.781351 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r"
Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.804590 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5d646b7d76-n4k5s" podStartSLOduration=12.276171169 podStartE2EDuration="37.804567176s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.286131026 +0000 UTC m=+1048.280959766" lastFinishedPulling="2026-01-23 14:22:06.814527033 +0000 UTC m=+1073.809355773" observedRunningTime="2026-01-23 14:22:16.796158972 +0000 UTC m=+1083.790987752" watchObservedRunningTime="2026-01-23 14:22:16.804567176 +0000 UTC m=+1083.799395926"
Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.815012 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" podStartSLOduration=14.1443807 podStartE2EDuration="37.814985457s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.302424258 +0000 UTC m=+1048.297252988" lastFinishedPulling="2026-01-23 14:22:04.973028965 +0000 UTC m=+1071.967857745" observedRunningTime="2026-01-23 14:22:16.812221307 +0000 UTC m=+1083.807050097" watchObservedRunningTime="2026-01-23 14:22:16.814985457 +0000 UTC m=+1083.809814237"
Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.832619 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-xst4r" podStartSLOduration=11.932190107 podStartE2EDuration="37.832601387s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.286933819 +0000 UTC m=+1048.281762559" lastFinishedPulling="2026-01-23 14:22:07.187345069 +0000 UTC m=+1074.182173839" observedRunningTime="2026-01-23 14:22:16.830190308 +0000 UTC m=+1083.825019088" watchObservedRunningTime="2026-01-23 14:22:16.832601387 +0000 UTC m=+1083.827430137"
observedRunningTime="2026-01-23 14:22:16.830190308 +0000 UTC m=+1083.825019088" watchObservedRunningTime="2026-01-23 14:22:16.832601387 +0000 UTC m=+1083.827430137" Jan 23 14:22:16 crc kubenswrapper[4775]: I0123 14:22:16.849531 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" podStartSLOduration=11.966527291 podStartE2EDuration="37.849512027s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.297701881 +0000 UTC m=+1048.292530621" lastFinishedPulling="2026-01-23 14:22:07.180686587 +0000 UTC m=+1074.175515357" observedRunningTime="2026-01-23 14:22:16.847139438 +0000 UTC m=+1083.841968178" watchObservedRunningTime="2026-01-23 14:22:16.849512027 +0000 UTC m=+1083.844340777" Jan 23 14:22:17 crc kubenswrapper[4775]: I0123 14:22:17.785070 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:22:17 crc kubenswrapper[4775]: I0123 14:22:17.795183 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" Jan 23 14:22:17 crc kubenswrapper[4775]: I0123 14:22:17.827011 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854zk48c" podStartSLOduration=30.525009426 podStartE2EDuration="38.826985323s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:58.923217779 +0000 UTC m=+1065.918046519" lastFinishedPulling="2026-01-23 14:22:07.225193666 +0000 UTC m=+1074.220022416" observedRunningTime="2026-01-23 14:22:17.821476514 +0000 UTC m=+1084.816305294" watchObservedRunningTime="2026-01-23 14:22:17.826985323 +0000 UTC m=+1084.821814103" Jan 23 14:22:20 crc kubenswrapper[4775]: I0123 14:22:20.104767 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:22:20 crc kubenswrapper[4775]: I0123 14:22:20.109929 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xtmz8" Jan 23 14:22:20 crc kubenswrapper[4775]: I0123 14:22:20.158482 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:22:20 crc kubenswrapper[4775]: I0123 14:22:20.162044 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7bd9774b6-vl7m5" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.824340 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" event={"ID":"313b5382-60cf-4627-8ba7-a091fc457989","Type":"ContainerStarted","Data":"45eeb3bc3fb6584943a2df57f12324ee8c36534129f85f7e57654aa0b142ab49"} Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.824706 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.826035 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" 
event={"ID":"9bad88d6-5ca9-4176-904d-72b793e1361e","Type":"ContainerStarted","Data":"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774"} Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.826454 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.827746 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" event={"ID":"f9da51f1-a035-44b8-9391-0d6018a84c61","Type":"ContainerStarted","Data":"806e8aecea719f6e700353416c729b360dfa041467a7209c9d2bb8907b9ae312"} Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.830574 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" event={"ID":"0784c928-e0c5-4afb-99cb-4f1f96820a14","Type":"ContainerStarted","Data":"e169751a6f6db310d8818ff117c620bf08fa6be49c30a4cab5e099963e416b30"} Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.830973 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.832822 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" event={"ID":"91da96b4-921a-4b88-9804-55745989e08b","Type":"ContainerStarted","Data":"906e3088174e265e5243cc6871b9f7408a37073496c495427f28cebdbcb04706"} Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.833036 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.859862 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" podStartSLOduration=43.8598392 podStartE2EDuration="43.8598392s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:22:22.855602997 +0000 UTC m=+1089.850431737" watchObservedRunningTime="2026-01-23 14:22:22.8598392 +0000 UTC m=+1089.854667940" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.877142 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" podStartSLOduration=3.159022954 podStartE2EDuration="43.877126941s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.265354864 +0000 UTC m=+1048.260183614" lastFinishedPulling="2026-01-23 14:22:21.983458841 +0000 UTC m=+1088.978287601" observedRunningTime="2026-01-23 14:22:22.872010503 +0000 UTC m=+1089.866839243" watchObservedRunningTime="2026-01-23 14:22:22.877126941 +0000 UTC m=+1089.871955681" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.893363 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2lhsf" podStartSLOduration=2.295776203 podStartE2EDuration="42.89334594s" podCreationTimestamp="2026-01-23 14:21:40 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.385726639 +0000 UTC m=+1048.380555379" lastFinishedPulling="2026-01-23 
14:22:21.983296356 +0000 UTC m=+1088.978125116" observedRunningTime="2026-01-23 14:22:22.89193872 +0000 UTC m=+1089.886767510" watchObservedRunningTime="2026-01-23 14:22:22.89334594 +0000 UTC m=+1089.888174680" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.912345 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" podStartSLOduration=2.816980289 podStartE2EDuration="43.91232997s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:40.886581045 +0000 UTC m=+1047.881409785" lastFinishedPulling="2026-01-23 14:22:21.981930706 +0000 UTC m=+1088.976759466" observedRunningTime="2026-01-23 14:22:22.911692772 +0000 UTC m=+1089.906521542" watchObservedRunningTime="2026-01-23 14:22:22.91232997 +0000 UTC m=+1089.907158710" Jan 23 14:22:22 crc kubenswrapper[4775]: I0123 14:22:22.931009 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" podStartSLOduration=3.041341456 podStartE2EDuration="43.930990891s" podCreationTimestamp="2026-01-23 14:21:39 +0000 UTC" firstStartedPulling="2026-01-23 14:21:41.092638952 +0000 UTC m=+1048.087467692" lastFinishedPulling="2026-01-23 14:22:21.982288367 +0000 UTC m=+1088.977117127" observedRunningTime="2026-01-23 14:22:22.925978285 +0000 UTC m=+1089.920807025" watchObservedRunningTime="2026-01-23 14:22:22.930990891 +0000 UTC m=+1089.925819641" Jan 23 14:22:23 crc kubenswrapper[4775]: I0123 14:22:23.218698 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:22:23 crc kubenswrapper[4775]: I0123 14:22:23.218784 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:22:29 crc kubenswrapper[4775]: I0123 14:22:29.888562 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-bgbpj" Jan 23 14:22:30 crc kubenswrapper[4775]: I0123 14:22:30.138603 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:22:30 crc kubenswrapper[4775]: I0123 14:22:30.206885 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-jrhlh" Jan 23 14:22:32 crc kubenswrapper[4775]: I0123 14:22:32.042868 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-bb8f85db-bkqk9" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.606940 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.610921 4775 util.go:30] "No sandbox for pod can be found. 
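The machine-config-daemon failure above is an ordinary HTTP liveness probe hitting a port nobody is listening on; the output field is the verbatim error string from Go's HTTP client. A minimal reproduction of that probe style (a hypothetical standalone checker, not the kubelet prober; the URL is taken from the log line):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// With nothing bound to the port this prints the same error the
		// prober recorded: Get "http://127.0.0.1:8798/health": dial tcp
		// 127.0.0.1:8798: connect: connection refused
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}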
Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.613056 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-plugins-conf" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.613956 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-default-user" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.614047 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-server-conf" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.614079 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"kube-root-ca.crt" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.614163 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openshift-service-ca.crt" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.614285 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-erlang-cookie" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.614358 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-server-dockercfg-88xgt" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.623619 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"] Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722443 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722486 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722525 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6nvw\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-kube-api-access-q6nvw\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722567 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722614 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722666 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722695 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722715 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.722769 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-06f05806-3448-44a1-9675-136131ab3921\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06f05806-3448-44a1-9675-136131ab3921\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824233 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824363 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824442 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824491 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824530 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824681 4775 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-06f05806-3448-44a1-9675-136131ab3921\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06f05806-3448-44a1-9675-136131ab3921\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824724 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824789 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.824913 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6nvw\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-kube-api-access-q6nvw\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.826144 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.826341 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.826775 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-server-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.829616 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.829679 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-06f05806-3448-44a1-9675-136131ab3921\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06f05806-3448-44a1-9675-136131ab3921\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/387dfcf2d9ab5c99f04714504985073c98a182156e32a8a18238fa00e934eb7b/globalmount\"" pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.830605 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.832863 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-pod-info\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.833260 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.842227 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.844745 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6nvw\" (UniqueName: \"kubernetes.io/projected/70288c27-7f95-4843-a8fb-f2ac58ea8e1f-kube-api-access-q6nvw\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.880839 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-06f05806-3448-44a1-9675-136131ab3921\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-06f05806-3448-44a1-9675-136131ab3921\") pod \"rabbitmq-server-0\" (UID: \"70288c27-7f95-4843-a8fb-f2ac58ea8e1f\") " pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.898955 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"]
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.900367 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.903373 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-default-user"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.903426 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-conf"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.903538 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-server-dockercfg-tvlpb"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.903735 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-broadcaster-plugins-conf"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.903768 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-broadcaster-erlang-cookie"
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.925070 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"]
Jan 23 14:22:42 crc kubenswrapper[4775]: I0123 14:22:42.951833 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027339 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/401a94b6-0628-4cea-b62a-c3229a913d16-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027725 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027767 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027856 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/401a94b6-0628-4cea-b62a-c3229a913d16-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027890 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027911 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.027967 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.028052 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb2gs\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-kube-api-access-tb2gs\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.028075 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.094587 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-galera-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.096185 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.100490 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-dockercfg-cr4k4"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.100920 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-svc"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.101129 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config-data"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.104279 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-scripts"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.109639 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.114173 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"combined-ca-bundle"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129629 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129684 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129710 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e595fd86-adf5-4556-9fff-92a693b79368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e595fd86-adf5-4556-9fff-92a693b79368\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129739 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/401a94b6-0628-4cea-b62a-c3229a913d16-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129765 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-default\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129786 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129823 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129858 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh9sh\" (UniqueName: \"kubernetes.io/projected/372c512d-5894-49da-ae1e-cb3e54aadacc-kube-api-access-vh9sh\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129878 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129905 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129929 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-kolla-config\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129959 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.129995 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.130015 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb2gs\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-kube-api-access-tb2gs\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.130042 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.130082 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/401a94b6-0628-4cea-b62a-c3229a913d16-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.130104 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.131234 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-erlang-cookie\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.131687 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-plugins\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.131978 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-plugins-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.132972 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/401a94b6-0628-4cea-b62a-c3229a913d16-server-conf\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.134699 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.134720 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b8aeee34c9457791b5054d3c85b310ed049d8dc36b23beea77ad5efef6c10870/globalmount\"" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.135198 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-rabbitmq-confd\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.151669 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/401a94b6-0628-4cea-b62a-c3229a913d16-pod-info\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.155642 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/401a94b6-0628-4cea-b62a-c3229a913d16-erlang-cookie-secret\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.160915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb2gs\" (UniqueName: \"kubernetes.io/projected/401a94b6-0628-4cea-b62a-c3229a913d16-kube-api-access-tb2gs\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.174915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b2e1233-3507-4076-a25f-98bbbbd64408\") pod \"rabbitmq-broadcaster-server-0\" (UID: \"401a94b6-0628-4cea-b62a-c3229a913d16\") " pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.221867 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.229602 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230590 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-default\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230636 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230657 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh9sh\" (UniqueName: \"kubernetes.io/projected/372c512d-5894-49da-ae1e-cb3e54aadacc-kube-api-access-vh9sh\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230682 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-kolla-config\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230733 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230777 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.230814 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e595fd86-adf5-4556-9fff-92a693b79368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e595fd86-adf5-4556-9fff-92a693b79368\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.231128 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.231913 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-default\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.232962 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.238881 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.242017 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/372c512d-5894-49da-ae1e-cb3e54aadacc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.242290 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/372c512d-5894-49da-ae1e-cb3e54aadacc-kolla-config\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.242571 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-server-dockercfg-zlrt7"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.242874 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-server-conf"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.243022 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-erlang-cookie"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.243147 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"rabbitmq-cell1-plugins-conf"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.243256 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"rabbitmq-cell1-default-user"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.244183 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/372c512d-5894-49da-ae1e-cb3e54aadacc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.256974 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.263915 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.263970 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e595fd86-adf5-4556-9fff-92a693b79368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e595fd86-adf5-4556-9fff-92a693b79368\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8bfe8abef1b31634e899a0e76673a6d30481acfe91057b467659b7666645fb84/globalmount\"" pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.264154 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-server-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.270708 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh9sh\" (UniqueName: \"kubernetes.io/projected/372c512d-5894-49da-ae1e-cb3e54aadacc-kube-api-access-vh9sh\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.275428 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.301742 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e595fd86-adf5-4556-9fff-92a693b79368\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e595fd86-adf5-4556-9fff-92a693b79368\") pod \"openstack-galera-0\" (UID: \"372c512d-5894-49da-ae1e-cb3e54aadacc\") " pod="nova-kuttl-default/openstack-galera-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.401764 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/memcached-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.402580 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.409683 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"memcached-memcached-dockercfg-n8szm"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.409940 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"memcached-config-data"
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.418457 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"]
Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.422626 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-galera-0"
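A reading aid rather than cluster behaviour: every kubenswrapper payload above follows the klog header convention Lmmdd hh:mm:ss.uuuuuu PID file.go:line] message (I = info severity), so a capture in which entries have run together can be re-split by matching that header. A small sketch (hypothetical helper, not part of any tool referenced in this log):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// klog header: severity letter, mmdd date, wall time, PID, source file:line, "]".
	header := regexp.MustCompile(`[IWEF]\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ \S+\.go:\d+\]`)
	sample := `I0123 14:22:43.402580 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0"`
	fmt.Println(header.FindString(sample)) // I0123 14:22:43.402580 4775 util.go:30]
}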
Need to start a new one" pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449651 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b05c189-a694-4cbc-b679-a974e6bf99bc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449713 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449740 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5fv2\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-kube-api-access-r5fv2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449759 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b05c189-a694-4cbc-b679-a974e6bf99bc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449778 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-52481726-f20d-47e3-96bb-73eb990ded39\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52481726-f20d-47e3-96bb-73eb990ded39\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449813 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449828 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449846 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449859 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-config-data\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449881 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97w92\" (UniqueName: \"kubernetes.io/projected/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kube-api-access-97w92\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449894 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kolla-config\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.449930 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550548 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550592 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b05c189-a694-4cbc-b679-a974e6bf99bc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550628 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550652 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5fv2\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-kube-api-access-r5fv2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550672 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b05c189-a694-4cbc-b679-a974e6bf99bc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550692 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-52481726-f20d-47e3-96bb-73eb990ded39\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52481726-f20d-47e3-96bb-73eb990ded39\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550712 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550728 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550744 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550760 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-config-data\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550785 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97w92\" (UniqueName: \"kubernetes.io/projected/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kube-api-access-97w92\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.550813 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kolla-config\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.551720 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kolla-config\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.551942 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.552213 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.552243 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.556587 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-config-data\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.556961 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4b05c189-a694-4cbc-b679-a974e6bf99bc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.558064 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4b05c189-a694-4cbc-b679-a974e6bf99bc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.558334 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.558373 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-52481726-f20d-47e3-96bb-73eb990ded39\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52481726-f20d-47e3-96bb-73eb990ded39\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/625e5d6e98a1815fffeff764585e01bbe4bc815f98a1e4d18fbfd842578c912b/globalmount\"" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.562021 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4b05c189-a694-4cbc-b679-a974e6bf99bc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.564523 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.580619 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97w92\" (UniqueName: \"kubernetes.io/projected/2e1f7aa1-1780-4ccb-b1a5-66b9b279d555-kube-api-access-97w92\") pod \"memcached-0\" (UID: \"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555\") " pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 
14:22:43.584533 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5fv2\" (UniqueName: \"kubernetes.io/projected/4b05c189-a694-4cbc-b679-a974e6bf99bc-kube-api-access-r5fv2\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.610277 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-52481726-f20d-47e3-96bb-73eb990ded39\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-52481726-f20d-47e3-96bb-73eb990ded39\") pod \"rabbitmq-cell1-server-0\" (UID: \"4b05c189-a694-4cbc-b679-a974e6bf99bc\") " pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.718286 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/memcached-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.760389 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-broadcaster-server-0"] Jan 23 14:22:43 crc kubenswrapper[4775]: W0123 14:22:43.772569 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod401a94b6_0628_4cea_b62a_c3229a913d16.slice/crio-c263b35ff639d1e267d788184045806470ad44d3f6ff110e43eef65c8168a798 WatchSource:0}: Error finding container c263b35ff639d1e267d788184045806470ad44d3f6ff110e43eef65c8168a798: Status 404 returned error can't find the container with id c263b35ff639d1e267d788184045806470ad44d3f6ff110e43eef65c8168a798 Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.872922 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:22:43 crc kubenswrapper[4775]: I0123 14:22:43.904821 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-galera-0"] Jan 23 14:22:43 crc kubenswrapper[4775]: W0123 14:22:43.912455 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod372c512d_5894_49da_ae1e_cb3e54aadacc.slice/crio-7f0592b7f7729bcd53da8c27c4b2b6922957d515cfa1449f4ab4e1e26d7d4ffc WatchSource:0}: Error finding container 7f0592b7f7729bcd53da8c27c4b2b6922957d515cfa1449f4ab4e1e26d7d4ffc: Status 404 returned error can't find the container with id 7f0592b7f7729bcd53da8c27c4b2b6922957d515cfa1449f4ab4e1e26d7d4ffc Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.031193 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"372c512d-5894-49da-ae1e-cb3e54aadacc","Type":"ContainerStarted","Data":"7f0592b7f7729bcd53da8c27c4b2b6922957d515cfa1449f4ab4e1e26d7d4ffc"} Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.032239 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"70288c27-7f95-4843-a8fb-f2ac58ea8e1f","Type":"ContainerStarted","Data":"681b7a688d4f04046470e3a96437a041c22fa95e2914d20b3a8b6ffbf246ee9d"} Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.033109 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"401a94b6-0628-4cea-b62a-c3229a913d16","Type":"ContainerStarted","Data":"c263b35ff639d1e267d788184045806470ad44d3f6ff110e43eef65c8168a798"} Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 
14:22:44.144918 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/memcached-0"] Jan 23 14:22:44 crc kubenswrapper[4775]: W0123 14:22:44.149010 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e1f7aa1_1780_4ccb_b1a5_66b9b279d555.slice/crio-f97e0016988c3904475e08cd0b2e338c4455dbbc7a7a7709e4a8925657685db4 WatchSource:0}: Error finding container f97e0016988c3904475e08cd0b2e338c4455dbbc7a7a7709e4a8925657685db4: Status 404 returned error can't find the container with id f97e0016988c3904475e08cd0b2e338c4455dbbc7a7a7709e4a8925657685db4 Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.307190 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/rabbitmq-cell1-server-0"] Jan 23 14:22:44 crc kubenswrapper[4775]: W0123 14:22:44.317889 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b05c189_a694_4cbc_b679_a974e6bf99bc.slice/crio-936053d0b19282a7ef8fac8691fc7f6ba9e7e72ffd151fdf6125986f578e9da5 WatchSource:0}: Error finding container 936053d0b19282a7ef8fac8691fc7f6ba9e7e72ffd151fdf6125986f578e9da5: Status 404 returned error can't find the container with id 936053d0b19282a7ef8fac8691fc7f6ba9e7e72ffd151fdf6125986f578e9da5 Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.519985 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.528052 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.533665 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"cert-galera-openstack-cell1-svc" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.533986 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-scripts" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.534097 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"galera-openstack-cell1-dockercfg-clnx5" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.538764 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-cell1-config-data" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.552721 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.666781 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.666927 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.666994 
4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.667049 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.667126 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.667175 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pprk\" (UniqueName: \"kubernetes.io/projected/481cbe1b-2796-4ad2-a342-3661afa62383-kube-api-access-9pprk\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.667211 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.667258 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.768818 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.768886 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.768915 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: 
\"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.768961 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.768991 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pprk\" (UniqueName: \"kubernetes.io/projected/481cbe1b-2796-4ad2-a342-3661afa62383-kube-api-access-9pprk\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.769016 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.769047 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.769102 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.770470 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.770550 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.770985 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.772965 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/481cbe1b-2796-4ad2-a342-3661afa62383-operator-scripts\") pod 
\"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.774419 4775 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.774473 4775 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/39a3b2bcc0c98acb393397741892ba686c2627f8ce2bbec98169f9c6b68efb3a/globalmount\"" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.775471 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.777503 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/481cbe1b-2796-4ad2-a342-3661afa62383-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.791025 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pprk\" (UniqueName: \"kubernetes.io/projected/481cbe1b-2796-4ad2-a342-3661afa62383-kube-api-access-9pprk\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.809216 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b09e829b-6f38-42c2-b363-ef7971d763f6\") pod \"openstack-cell1-galera-0\" (UID: \"481cbe1b-2796-4ad2-a342-3661afa62383\") " pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:44 crc kubenswrapper[4775]: I0123 14:22:44.854500 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:22:45 crc kubenswrapper[4775]: I0123 14:22:45.107131 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555","Type":"ContainerStarted","Data":"f97e0016988c3904475e08cd0b2e338c4455dbbc7a7a7709e4a8925657685db4"} Jan 23 14:22:45 crc kubenswrapper[4775]: I0123 14:22:45.123430 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"4b05c189-a694-4cbc-b679-a974e6bf99bc","Type":"ContainerStarted","Data":"936053d0b19282a7ef8fac8691fc7f6ba9e7e72ffd151fdf6125986f578e9da5"} Jan 23 14:22:45 crc kubenswrapper[4775]: I0123 14:22:45.441016 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstack-cell1-galera-0"] Jan 23 14:22:46 crc kubenswrapper[4775]: I0123 14:22:46.136785 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"481cbe1b-2796-4ad2-a342-3661afa62383","Type":"ContainerStarted","Data":"806899daf9f516e2ac0cf4380290e79c972a42cbdb37df750944114a549d2e34"} Jan 23 14:22:53 crc kubenswrapper[4775]: I0123 14:22:53.220317 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:22:53 crc kubenswrapper[4775]: I0123 14:22:53.220948 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.361862 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.362600 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vh9sh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_nova-kuttl-default(372c512d-5894-49da-ae1e-cb3e54aadacc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.363886 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/openstack-galera-0" podUID="372c512d-5894-49da-ae1e-cb3e54aadacc" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.856042 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.856947 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pprk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_nova-kuttl-default(481cbe1b-2796-4ad2-a342-3661afa62383): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.858610 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/openstack-cell1-galera-0" podUID="481cbe1b-2796-4ad2-a342-3661afa62383" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.908957 4775 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.909474 4775 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 
30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q6nvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000710000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_nova-kuttl-default(70288c27-7f95-4843-a8fb-f2ac58ea8e1f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 23 14:23:00 crc kubenswrapper[4775]: E0123 14:23:00.910822 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="nova-kuttl-default/rabbitmq-server-0" podUID="70288c27-7f95-4843-a8fb-f2ac58ea8e1f" Jan 23 14:23:01 crc kubenswrapper[4775]: I0123 14:23:01.263206 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/memcached-0" event={"ID":"2e1f7aa1-1780-4ccb-b1a5-66b9b279d555","Type":"ContainerStarted","Data":"de03829a0d3a13c8fdc5349b7411f0d4b6c906ba5769d37486b0147c5d6ff421"} Jan 23 14:23:01 crc kubenswrapper[4775]: E0123 14:23:01.265851 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="nova-kuttl-default/openstack-cell1-galera-0" podUID="481cbe1b-2796-4ad2-a342-3661afa62383" Jan 23 14:23:01 crc kubenswrapper[4775]: E0123 14:23:01.265863 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="nova-kuttl-default/openstack-galera-0" podUID="372c512d-5894-49da-ae1e-cb3e54aadacc" Jan 23 14:23:01 crc kubenswrapper[4775]: I0123 14:23:01.346906 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/memcached-0" podStartSLOduration=1.665656536 podStartE2EDuration="18.346795916s" podCreationTimestamp="2026-01-23 14:22:43 +0000 UTC" firstStartedPulling="2026-01-23 14:22:44.150509037 +0000 UTC m=+1111.145337797" lastFinishedPulling="2026-01-23 14:23:00.831648447 +0000 UTC m=+1127.826477177" observedRunningTime="2026-01-23 14:23:01.339404412 +0000 UTC m=+1128.334233202" watchObservedRunningTime="2026-01-23 14:23:01.346795916 +0000 UTC m=+1128.341624676" Jan 23 14:23:02 crc kubenswrapper[4775]: I0123 14:23:02.273270 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/memcached-0" Jan 23 14:23:03 crc kubenswrapper[4775]: I0123 14:23:03.283117 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"70288c27-7f95-4843-a8fb-f2ac58ea8e1f","Type":"ContainerStarted","Data":"2eefbe509e8194211bd62dec0bf3bff4e146f4a7f14ae1ac4ad0df7edbe56abb"} Jan 23 14:23:03 crc kubenswrapper[4775]: I0123 14:23:03.286235 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"401a94b6-0628-4cea-b62a-c3229a913d16","Type":"ContainerStarted","Data":"951381f236bf6d19ffe6bb0736f765d793795cae85a27156e7dda6bfa98ec1bb"} Jan 23 14:23:03 crc kubenswrapper[4775]: I0123 14:23:03.293367 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"4b05c189-a694-4cbc-b679-a974e6bf99bc","Type":"ContainerStarted","Data":"67185d02961f666b77208d2d95b7f2da17886cf0f7543d1ab63c0d9e0e7ad316"} Jan 23 14:23:08 crc kubenswrapper[4775]: I0123 14:23:08.719893 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/memcached-0" Jan 23 14:23:13 crc kubenswrapper[4775]: I0123 14:23:13.382516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"372c512d-5894-49da-ae1e-cb3e54aadacc","Type":"ContainerStarted","Data":"defb01cab8366ab12bcf75b7962f1f8034a00742f825a1d91bef8750e90b2297"} Jan 23 14:23:13 crc kubenswrapper[4775]: I0123 14:23:13.384653 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"481cbe1b-2796-4ad2-a342-3661afa62383","Type":"ContainerStarted","Data":"31b5cf8ee4ede56b06f3da3e76cbc8fcf83ffea6473bc7a9161cc9c2317450ba"} Jan 23 14:23:17 crc kubenswrapper[4775]: I0123 14:23:17.420289 4775 generic.go:334] "Generic (PLEG): container finished" podID="481cbe1b-2796-4ad2-a342-3661afa62383" containerID="31b5cf8ee4ede56b06f3da3e76cbc8fcf83ffea6473bc7a9161cc9c2317450ba" exitCode=0 Jan 23 14:23:17 crc kubenswrapper[4775]: I0123 14:23:17.420382 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"481cbe1b-2796-4ad2-a342-3661afa62383","Type":"ContainerDied","Data":"31b5cf8ee4ede56b06f3da3e76cbc8fcf83ffea6473bc7a9161cc9c2317450ba"} Jan 23 14:23:17 crc kubenswrapper[4775]: I0123 14:23:17.424463 4775 generic.go:334] "Generic (PLEG): container finished" podID="372c512d-5894-49da-ae1e-cb3e54aadacc" containerID="defb01cab8366ab12bcf75b7962f1f8034a00742f825a1d91bef8750e90b2297" exitCode=0 
Jan 23 14:23:17 crc kubenswrapper[4775]: I0123 14:23:17.424508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"372c512d-5894-49da-ae1e-cb3e54aadacc","Type":"ContainerDied","Data":"defb01cab8366ab12bcf75b7962f1f8034a00742f825a1d91bef8750e90b2297"} Jan 23 14:23:18 crc kubenswrapper[4775]: I0123 14:23:18.438974 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-galera-0" event={"ID":"372c512d-5894-49da-ae1e-cb3e54aadacc","Type":"ContainerStarted","Data":"e58c53ea0950bc5520e5855a7d3316040fecf0f185fec6cf47fa56ff5be619e0"} Jan 23 14:23:18 crc kubenswrapper[4775]: I0123 14:23:18.443435 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstack-cell1-galera-0" event={"ID":"481cbe1b-2796-4ad2-a342-3661afa62383","Type":"ContainerStarted","Data":"2701bae5178de9ed981cd188c27d1927ae6bbb38cd686c28b1cf6af8a68e9d4f"} Jan 23 14:23:18 crc kubenswrapper[4775]: I0123 14:23:18.481616 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-galera-0" podStartSLOduration=7.915844805 podStartE2EDuration="36.481592331s" podCreationTimestamp="2026-01-23 14:22:42 +0000 UTC" firstStartedPulling="2026-01-23 14:22:43.92132343 +0000 UTC m=+1110.916152170" lastFinishedPulling="2026-01-23 14:23:12.487070916 +0000 UTC m=+1139.481899696" observedRunningTime="2026-01-23 14:23:18.471952962 +0000 UTC m=+1145.466781772" watchObservedRunningTime="2026-01-23 14:23:18.481592331 +0000 UTC m=+1145.476421101" Jan 23 14:23:18 crc kubenswrapper[4775]: I0123 14:23:18.501229 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstack-cell1-galera-0" podStartSLOduration=8.467581303 podStartE2EDuration="35.501203969s" podCreationTimestamp="2026-01-23 14:22:43 +0000 UTC" firstStartedPulling="2026-01-23 14:22:45.455007995 +0000 UTC m=+1112.449836775" lastFinishedPulling="2026-01-23 14:23:12.488630671 +0000 UTC m=+1139.483459441" observedRunningTime="2026-01-23 14:23:18.493717412 +0000 UTC m=+1145.488546182" watchObservedRunningTime="2026-01-23 14:23:18.501203969 +0000 UTC m=+1145.496032749" Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.219484 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.221507 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.221673 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.222399 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.222567 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1" gracePeriod=600 Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.423862 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.424331 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.497647 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1" exitCode=0 Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.497691 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1"} Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.497725 4775 scope.go:117] "RemoveContainer" containerID="fa8fa956c376098d850acaf12f40cfec6f35655328fae4e2ad440d4fb20e4881" Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.547700 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:23:23 crc kubenswrapper[4775]: I0123 14:23:23.645146 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-galera-0" Jan 23 14:23:24 crc kubenswrapper[4775]: I0123 14:23:24.509025 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596"} Jan 23 14:23:24 crc kubenswrapper[4775]: I0123 14:23:24.855291 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:23:24 crc kubenswrapper[4775]: I0123 14:23:24.855363 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:23:24 crc kubenswrapper[4775]: I0123 14:23:24.950601 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:23:25 crc kubenswrapper[4775]: I0123 14:23:25.683616 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/openstack-cell1-galera-0" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.185368 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-czhxs"] Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.187434 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.190625 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.248356 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-czhxs"] Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.295069 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dfv4\" (UniqueName: \"kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.295197 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.397349 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dfv4\" (UniqueName: \"kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.397475 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.399124 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.436237 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dfv4\" (UniqueName: \"kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4\") pod \"root-account-create-update-czhxs\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:32 crc kubenswrapper[4775]: I0123 14:23:32.559705 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.080573 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-czhxs"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.337372 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-create-8k7zh"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.338864 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.350871 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-8k7zh"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.414574 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.414690 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94j92\" (UniqueName: \"kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.434782 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-72a2-account-create-update-4q5xn"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.435910 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.439896 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-db-secret" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.449781 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-72a2-account-create-update-4q5xn"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.515220 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.515297 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.515327 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptpbw\" (UniqueName: \"kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.515377 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94j92\" (UniqueName: \"kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.516140 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.542540 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94j92\" (UniqueName: \"kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92\") pod \"keystone-db-create-8k7zh\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.616541 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.616600 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptpbw\" (UniqueName: 
\"kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.619086 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.632682 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-create-qn6k5"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.633871 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.644345 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptpbw\" (UniqueName: \"kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw\") pod \"keystone-72a2-account-create-update-4q5xn\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.657734 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-qn6k5"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.659493 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-czhxs" event={"ID":"147f8416-94ee-4e77-bda1-ad3a06658335","Type":"ContainerStarted","Data":"405af6d0ad574516571eab38c8f59961044ec46ba3fe4637f7db48cea3e9b24f"} Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.683850 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.717226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.717272 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f62n\" (UniqueName: \"kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.749519 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-fb53-account-create-update-mth7w"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.751993 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.752559 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-fb53-account-create-update-mth7w"] Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.763207 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-db-secret" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.763655 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.819185 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzbz4\" (UniqueName: \"kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.819704 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.819751 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f62n\" (UniqueName: \"kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.819872 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.821315 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.841613 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f62n\" (UniqueName: \"kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n\") pod \"placement-db-create-qn6k5\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.920953 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzbz4\" (UniqueName: \"kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " 
pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.921073 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.921674 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.946690 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzbz4\" (UniqueName: \"kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4\") pod \"placement-fb53-account-create-update-mth7w\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:33 crc kubenswrapper[4775]: I0123 14:23:33.993403 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.050734 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-72a2-account-create-update-4q5xn"] Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.131837 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.136192 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-create-8k7zh"] Jan 23 14:23:34 crc kubenswrapper[4775]: W0123 14:23:34.159630 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda477c0f_52c9_4e94_894f_d953e46afd95.slice/crio-79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83 WatchSource:0}: Error finding container 79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83: Status 404 returned error can't find the container with id 79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83 Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.374021 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-fb53-account-create-update-mth7w"] Jan 23 14:23:34 crc kubenswrapper[4775]: W0123 14:23:34.380507 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2887a864_f392_4887_8b38_bde90ef8f18d.slice/crio-02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485 WatchSource:0}: Error finding container 02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485: Status 404 returned error can't find the container with id 02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485 Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.538771 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-create-qn6k5"] Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.668018 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-8k7zh" event={"ID":"da477c0f-52c9-4e94-894f-d953e46afd95","Type":"ContainerStarted","Data":"4198c894ee5e56e286b0cbfe28fec2b93833db9cb46297fad57dce94d57cabf9"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.668943 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-8k7zh" event={"ID":"da477c0f-52c9-4e94-894f-d953e46afd95","Type":"ContainerStarted","Data":"79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.669060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-qn6k5" event={"ID":"c7a04db9-60c9-4bce-8100-18a4134d0c86","Type":"ContainerStarted","Data":"6ae01278c94162e3f61e3a0dd642725fbd3ab9566bad996ff5a2aac0b55f4ff8"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.672504 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-czhxs" event={"ID":"147f8416-94ee-4e77-bda1-ad3a06658335","Type":"ContainerStarted","Data":"a13f8eef0e3c756f922ffa047c8687839a95c0c6de399f124374a283f7dcaa06"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.673896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" event={"ID":"7a5345f7-7dc8-4e09-8566-ee1dbb897cce","Type":"ContainerStarted","Data":"45eb281a90784378326e137fb73e4ed8e5e8582744a86eeaf4ee707b7c73c128"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.673932 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" 
event={"ID":"7a5345f7-7dc8-4e09-8566-ee1dbb897cce","Type":"ContainerStarted","Data":"236583e0639aab4177c92e4624c67f9bb19bceffe2c766fbdeb59a8d591503f1"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.675863 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" event={"ID":"2887a864-f392-4887-8b38-bde90ef8f18d","Type":"ContainerStarted","Data":"02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485"} Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.694665 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/root-account-create-update-czhxs" podStartSLOduration=2.694640706 podStartE2EDuration="2.694640706s" podCreationTimestamp="2026-01-23 14:23:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:23:34.686090508 +0000 UTC m=+1161.680919248" watchObservedRunningTime="2026-01-23 14:23:34.694640706 +0000 UTC m=+1161.689469446" Jan 23 14:23:34 crc kubenswrapper[4775]: I0123 14:23:34.725627 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" podStartSLOduration=1.725603362 podStartE2EDuration="1.725603362s" podCreationTimestamp="2026-01-23 14:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:23:34.703290126 +0000 UTC m=+1161.698118886" watchObservedRunningTime="2026-01-23 14:23:34.725603362 +0000 UTC m=+1161.720432122" Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.685601 4775 generic.go:334] "Generic (PLEG): container finished" podID="da477c0f-52c9-4e94-894f-d953e46afd95" containerID="4198c894ee5e56e286b0cbfe28fec2b93833db9cb46297fad57dce94d57cabf9" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.685702 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-8k7zh" event={"ID":"da477c0f-52c9-4e94-894f-d953e46afd95","Type":"ContainerDied","Data":"4198c894ee5e56e286b0cbfe28fec2b93833db9cb46297fad57dce94d57cabf9"} Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.688288 4775 generic.go:334] "Generic (PLEG): container finished" podID="c7a04db9-60c9-4bce-8100-18a4134d0c86" containerID="750eb99745aee2f0e8dca16ba12e68de151eeb1758e4a96888cb2f880483b793" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.688431 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-qn6k5" event={"ID":"c7a04db9-60c9-4bce-8100-18a4134d0c86","Type":"ContainerDied","Data":"750eb99745aee2f0e8dca16ba12e68de151eeb1758e4a96888cb2f880483b793"} Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.690709 4775 generic.go:334] "Generic (PLEG): container finished" podID="147f8416-94ee-4e77-bda1-ad3a06658335" containerID="a13f8eef0e3c756f922ffa047c8687839a95c0c6de399f124374a283f7dcaa06" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.690811 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-czhxs" event={"ID":"147f8416-94ee-4e77-bda1-ad3a06658335","Type":"ContainerDied","Data":"a13f8eef0e3c756f922ffa047c8687839a95c0c6de399f124374a283f7dcaa06"} Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.692779 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="7a5345f7-7dc8-4e09-8566-ee1dbb897cce" containerID="45eb281a90784378326e137fb73e4ed8e5e8582744a86eeaf4ee707b7c73c128" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.692904 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" event={"ID":"7a5345f7-7dc8-4e09-8566-ee1dbb897cce","Type":"ContainerDied","Data":"45eb281a90784378326e137fb73e4ed8e5e8582744a86eeaf4ee707b7c73c128"} Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.695030 4775 generic.go:334] "Generic (PLEG): container finished" podID="2887a864-f392-4887-8b38-bde90ef8f18d" containerID="a2f2a732f030cd4d4d5df85398503f60726ce73a20188125433f4f1e1c54a86f" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.695083 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" event={"ID":"2887a864-f392-4887-8b38-bde90ef8f18d","Type":"ContainerDied","Data":"a2f2a732f030cd4d4d5df85398503f60726ce73a20188125433f4f1e1c54a86f"} Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.697074 4775 generic.go:334] "Generic (PLEG): container finished" podID="401a94b6-0628-4cea-b62a-c3229a913d16" containerID="951381f236bf6d19ffe6bb0736f765d793795cae85a27156e7dda6bfa98ec1bb" exitCode=0 Jan 23 14:23:35 crc kubenswrapper[4775]: I0123 14:23:35.697119 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"401a94b6-0628-4cea-b62a-c3229a913d16","Type":"ContainerDied","Data":"951381f236bf6d19ffe6bb0736f765d793795cae85a27156e7dda6bfa98ec1bb"} Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.707086 4775 generic.go:334] "Generic (PLEG): container finished" podID="70288c27-7f95-4843-a8fb-f2ac58ea8e1f" containerID="2eefbe509e8194211bd62dec0bf3bff4e146f4a7f14ae1ac4ad0df7edbe56abb" exitCode=0 Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.707166 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"70288c27-7f95-4843-a8fb-f2ac58ea8e1f","Type":"ContainerDied","Data":"2eefbe509e8194211bd62dec0bf3bff4e146f4a7f14ae1ac4ad0df7edbe56abb"} Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.710062 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" event={"ID":"401a94b6-0628-4cea-b62a-c3229a913d16","Type":"ContainerStarted","Data":"9b88c2031cdf95c8db38b2e10aae786e8def8ffce67f1f2bf5cc8f4b11ad1afb"} Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.710450 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.711936 4775 generic.go:334] "Generic (PLEG): container finished" podID="4b05c189-a694-4cbc-b679-a974e6bf99bc" containerID="67185d02961f666b77208d2d95b7f2da17886cf0f7543d1ab63c0d9e0e7ad316" exitCode=0 Jan 23 14:23:36 crc kubenswrapper[4775]: I0123 14:23:36.712067 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"4b05c189-a694-4cbc-b679-a974e6bf99bc","Type":"ContainerDied","Data":"67185d02961f666b77208d2d95b7f2da17886cf0f7543d1ab63c0d9e0e7ad316"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.086853 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.105135 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" podStartSLOduration=38.980406865 podStartE2EDuration="56.105113741s" podCreationTimestamp="2026-01-23 14:22:41 +0000 UTC" firstStartedPulling="2026-01-23 14:22:43.775811106 +0000 UTC m=+1110.770639846" lastFinishedPulling="2026-01-23 14:23:00.900517942 +0000 UTC m=+1127.895346722" observedRunningTime="2026-01-23 14:23:36.804547116 +0000 UTC m=+1163.799375896" watchObservedRunningTime="2026-01-23 14:23:37.105113741 +0000 UTC m=+1164.099942491" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.111858 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.124923 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.134785 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.148937 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.181642 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzbz4\" (UniqueName: \"kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4\") pod \"2887a864-f392-4887-8b38-bde90ef8f18d\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182021 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts\") pod \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182162 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94j92\" (UniqueName: \"kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92\") pod \"da477c0f-52c9-4e94-894f-d953e46afd95\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182263 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts\") pod \"c7a04db9-60c9-4bce-8100-18a4134d0c86\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182381 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts\") pod \"da477c0f-52c9-4e94-894f-d953e46afd95\" (UID: \"da477c0f-52c9-4e94-894f-d953e46afd95\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182482 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7a5345f7-7dc8-4e09-8566-ee1dbb897cce" (UID: "7a5345f7-7dc8-4e09-8566-ee1dbb897cce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182613 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f62n\" (UniqueName: \"kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n\") pod \"c7a04db9-60c9-4bce-8100-18a4134d0c86\" (UID: \"c7a04db9-60c9-4bce-8100-18a4134d0c86\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182672 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c7a04db9-60c9-4bce-8100-18a4134d0c86" (UID: "c7a04db9-60c9-4bce-8100-18a4134d0c86"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182752 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptpbw\" (UniqueName: \"kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw\") pod \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\" (UID: \"7a5345f7-7dc8-4e09-8566-ee1dbb897cce\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182780 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da477c0f-52c9-4e94-894f-d953e46afd95" (UID: "da477c0f-52c9-4e94-894f-d953e46afd95"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.182882 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts\") pod \"2887a864-f392-4887-8b38-bde90ef8f18d\" (UID: \"2887a864-f392-4887-8b38-bde90ef8f18d\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.183401 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2887a864-f392-4887-8b38-bde90ef8f18d" (UID: "2887a864-f392-4887-8b38-bde90ef8f18d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.183725 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c7a04db9-60c9-4bce-8100-18a4134d0c86-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.183753 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da477c0f-52c9-4e94-894f-d953e46afd95-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.183765 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2887a864-f392-4887-8b38-bde90ef8f18d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.183778 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.186267 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n" (OuterVolumeSpecName: "kube-api-access-9f62n") pod "c7a04db9-60c9-4bce-8100-18a4134d0c86" (UID: "c7a04db9-60c9-4bce-8100-18a4134d0c86"). InnerVolumeSpecName "kube-api-access-9f62n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.186359 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw" (OuterVolumeSpecName: "kube-api-access-ptpbw") pod "7a5345f7-7dc8-4e09-8566-ee1dbb897cce" (UID: "7a5345f7-7dc8-4e09-8566-ee1dbb897cce"). InnerVolumeSpecName "kube-api-access-ptpbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.186510 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4" (OuterVolumeSpecName: "kube-api-access-xzbz4") pod "2887a864-f392-4887-8b38-bde90ef8f18d" (UID: "2887a864-f392-4887-8b38-bde90ef8f18d"). InnerVolumeSpecName "kube-api-access-xzbz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.186979 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92" (OuterVolumeSpecName: "kube-api-access-94j92") pod "da477c0f-52c9-4e94-894f-d953e46afd95" (UID: "da477c0f-52c9-4e94-894f-d953e46afd95"). InnerVolumeSpecName "kube-api-access-94j92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285058 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dfv4\" (UniqueName: \"kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4\") pod \"147f8416-94ee-4e77-bda1-ad3a06658335\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285103 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts\") pod \"147f8416-94ee-4e77-bda1-ad3a06658335\" (UID: \"147f8416-94ee-4e77-bda1-ad3a06658335\") " Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285397 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzbz4\" (UniqueName: \"kubernetes.io/projected/2887a864-f392-4887-8b38-bde90ef8f18d-kube-api-access-xzbz4\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285415 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94j92\" (UniqueName: \"kubernetes.io/projected/da477c0f-52c9-4e94-894f-d953e46afd95-kube-api-access-94j92\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285425 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f62n\" (UniqueName: \"kubernetes.io/projected/c7a04db9-60c9-4bce-8100-18a4134d0c86-kube-api-access-9f62n\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285434 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptpbw\" (UniqueName: \"kubernetes.io/projected/7a5345f7-7dc8-4e09-8566-ee1dbb897cce-kube-api-access-ptpbw\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.285916 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "147f8416-94ee-4e77-bda1-ad3a06658335" (UID: "147f8416-94ee-4e77-bda1-ad3a06658335"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.287905 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4" (OuterVolumeSpecName: "kube-api-access-6dfv4") pod "147f8416-94ee-4e77-bda1-ad3a06658335" (UID: "147f8416-94ee-4e77-bda1-ad3a06658335"). InnerVolumeSpecName "kube-api-access-6dfv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.387474 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dfv4\" (UniqueName: \"kubernetes.io/projected/147f8416-94ee-4e77-bda1-ad3a06658335-kube-api-access-6dfv4\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.387551 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/147f8416-94ee-4e77-bda1-ad3a06658335-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.723299 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.725709 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.731682 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-72a2-account-create-update-4q5xn" event={"ID":"7a5345f7-7dc8-4e09-8566-ee1dbb897cce","Type":"ContainerDied","Data":"236583e0639aab4177c92e4624c67f9bb19bceffe2c766fbdeb59a8d591503f1"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.731743 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="236583e0639aab4177c92e4624c67f9bb19bceffe2c766fbdeb59a8d591503f1" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.731763 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-fb53-account-create-update-mth7w" event={"ID":"2887a864-f392-4887-8b38-bde90ef8f18d","Type":"ContainerDied","Data":"02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.731786 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02596b4b1c706019a8d6fa34ecd0dfb24ae934ff81f9a5a8def7d538d78d1485" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.731808 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-server-0" event={"ID":"70288c27-7f95-4843-a8fb-f2ac58ea8e1f","Type":"ContainerStarted","Data":"1c3fabd85eddaecf2f2a4c36001f074199e14c03147bf42a4502d2dc2d54274a"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.740884 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/rabbitmq-cell1-server-0" event={"ID":"4b05c189-a694-4cbc-b679-a974e6bf99bc","Type":"ContainerStarted","Data":"09e16692a9886632fbbac4f6d7eb58e6a68df930b36cf38577c12306ef8533ee"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.745667 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-create-8k7zh" event={"ID":"da477c0f-52c9-4e94-894f-d953e46afd95","Type":"ContainerDied","Data":"79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.745735 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79d037d5c3cf290495e933ccff8ef1742c1454d3fce36960b9f307c4f3e5cd83" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.746015 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-create-8k7zh" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.770648 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-create-qn6k5" event={"ID":"c7a04db9-60c9-4bce-8100-18a4134d0c86","Type":"ContainerDied","Data":"6ae01278c94162e3f61e3a0dd642725fbd3ab9566bad996ff5a2aac0b55f4ff8"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.770708 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ae01278c94162e3f61e3a0dd642725fbd3ab9566bad996ff5a2aac0b55f4ff8" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.770795 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-create-qn6k5" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.800409 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-czhxs" Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.800476 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-czhxs" event={"ID":"147f8416-94ee-4e77-bda1-ad3a06658335","Type":"ContainerDied","Data":"405af6d0ad574516571eab38c8f59961044ec46ba3fe4637f7db48cea3e9b24f"} Jan 23 14:23:37 crc kubenswrapper[4775]: I0123 14:23:37.800499 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405af6d0ad574516571eab38c8f59961044ec46ba3fe4637f7db48cea3e9b24f" Jan 23 14:23:38 crc kubenswrapper[4775]: I0123 14:23:38.668080 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-czhxs"] Jan 23 14:23:38 crc kubenswrapper[4775]: I0123 14:23:38.674698 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-czhxs"] Jan 23 14:23:38 crc kubenswrapper[4775]: I0123 14:23:38.805897 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:23:38 crc kubenswrapper[4775]: I0123 14:23:38.831737 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-cell1-server-0" podStartSLOduration=40.30453698 podStartE2EDuration="56.831715971s" podCreationTimestamp="2026-01-23 14:22:42 +0000 UTC" firstStartedPulling="2026-01-23 14:22:44.320003706 +0000 UTC m=+1111.314832446" lastFinishedPulling="2026-01-23 14:23:00.847182697 +0000 UTC m=+1127.842011437" observedRunningTime="2026-01-23 14:23:38.824370378 +0000 UTC m=+1165.819199128" watchObservedRunningTime="2026-01-23 14:23:38.831715971 +0000 UTC m=+1165.826544711" Jan 23 14:23:38 crc kubenswrapper[4775]: I0123 14:23:38.850710 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/rabbitmq-server-0" podStartSLOduration=-9223371979.004097 podStartE2EDuration="57.8506789s" podCreationTimestamp="2026-01-23 14:22:41 +0000 UTC" firstStartedPulling="2026-01-23 14:22:43.27522248 +0000 UTC m=+1110.270051210" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:23:38.84928625 +0000 UTC m=+1165.844115010" watchObservedRunningTime="2026-01-23 14:23:38.8506789 +0000 UTC m=+1165.845507640" Jan 23 14:23:39 crc kubenswrapper[4775]: I0123 14:23:39.722761 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="147f8416-94ee-4e77-bda1-ad3a06658335" path="/var/lib/kubelet/pods/147f8416-94ee-4e77-bda1-ad3a06658335/volumes" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.209201 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-4sr9v"] Jan 23 14:23:42 crc kubenswrapper[4775]: E0123 14:23:42.209978 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="147f8416-94ee-4e77-bda1-ad3a06658335" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.209994 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="147f8416-94ee-4e77-bda1-ad3a06658335" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: E0123 14:23:42.210013 4775 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2887a864-f392-4887-8b38-bde90ef8f18d" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210021 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2887a864-f392-4887-8b38-bde90ef8f18d" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: E0123 14:23:42.210029 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da477c0f-52c9-4e94-894f-d953e46afd95" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210036 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="da477c0f-52c9-4e94-894f-d953e46afd95" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: E0123 14:23:42.210059 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5345f7-7dc8-4e09-8566-ee1dbb897cce" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210066 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5345f7-7dc8-4e09-8566-ee1dbb897cce" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: E0123 14:23:42.210079 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a04db9-60c9-4bce-8100-18a4134d0c86" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210085 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a04db9-60c9-4bce-8100-18a4134d0c86" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210250 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="147f8416-94ee-4e77-bda1-ad3a06658335" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210264 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5345f7-7dc8-4e09-8566-ee1dbb897cce" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210286 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2887a864-f392-4887-8b38-bde90ef8f18d" containerName="mariadb-account-create-update" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210294 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a04db9-60c9-4bce-8100-18a4134d0c86" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210306 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="da477c0f-52c9-4e94-894f-d953e46afd95" containerName="mariadb-database-create" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.210910 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.213294 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-mariadb-root-db-secret" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.242050 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-4sr9v"] Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.366866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.367302 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x42sf\" (UniqueName: \"kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.470506 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.471046 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x42sf\" (UniqueName: \"kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.472069 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.502458 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x42sf\" (UniqueName: \"kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf\") pod \"root-account-create-update-4sr9v\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.526797 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:42 crc kubenswrapper[4775]: I0123 14:23:42.955026 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:23:43 crc kubenswrapper[4775]: I0123 14:23:43.014353 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-4sr9v"] Jan 23 14:23:43 crc kubenswrapper[4775]: I0123 14:23:43.842902 4775 generic.go:334] "Generic (PLEG): container finished" podID="498646da-c28d-4b9c-b61b-cd3c3b59455d" containerID="e4d3d7427f456db9c410656944ad8601abb63e17de245cf5ef8fa44d9943c71d" exitCode=0 Jan 23 14:23:43 crc kubenswrapper[4775]: I0123 14:23:43.842981 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-4sr9v" event={"ID":"498646da-c28d-4b9c-b61b-cd3c3b59455d","Type":"ContainerDied","Data":"e4d3d7427f456db9c410656944ad8601abb63e17de245cf5ef8fa44d9943c71d"} Jan 23 14:23:43 crc kubenswrapper[4775]: I0123 14:23:43.843288 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-4sr9v" event={"ID":"498646da-c28d-4b9c-b61b-cd3c3b59455d","Type":"ContainerStarted","Data":"8a1e3102ab3678ec9a55bd89768797ae82d5c544972ca2b935a13b2efcc69545"} Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.236334 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.313793 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x42sf\" (UniqueName: \"kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf\") pod \"498646da-c28d-4b9c-b61b-cd3c3b59455d\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.313985 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts\") pod \"498646da-c28d-4b9c-b61b-cd3c3b59455d\" (UID: \"498646da-c28d-4b9c-b61b-cd3c3b59455d\") " Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.315005 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "498646da-c28d-4b9c-b61b-cd3c3b59455d" (UID: "498646da-c28d-4b9c-b61b-cd3c3b59455d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.322668 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf" (OuterVolumeSpecName: "kube-api-access-x42sf") pod "498646da-c28d-4b9c-b61b-cd3c3b59455d" (UID: "498646da-c28d-4b9c-b61b-cd3c3b59455d"). InnerVolumeSpecName "kube-api-access-x42sf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.415486 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x42sf\" (UniqueName: \"kubernetes.io/projected/498646da-c28d-4b9c-b61b-cd3c3b59455d-kube-api-access-x42sf\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.415788 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/498646da-c28d-4b9c-b61b-cd3c3b59455d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.863103 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-4sr9v" event={"ID":"498646da-c28d-4b9c-b61b-cd3c3b59455d","Type":"ContainerDied","Data":"8a1e3102ab3678ec9a55bd89768797ae82d5c544972ca2b935a13b2efcc69545"} Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.863361 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a1e3102ab3678ec9a55bd89768797ae82d5c544972ca2b935a13b2efcc69545" Jan 23 14:23:45 crc kubenswrapper[4775]: I0123 14:23:45.863315 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-4sr9v" Jan 23 14:23:48 crc kubenswrapper[4775]: I0123 14:23:48.681660 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-4sr9v"] Jan 23 14:23:48 crc kubenswrapper[4775]: I0123 14:23:48.693200 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-4sr9v"] Jan 23 14:23:49 crc kubenswrapper[4775]: I0123 14:23:49.728970 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="498646da-c28d-4b9c-b61b-cd3c3b59455d" path="/var/lib/kubelet/pods/498646da-c28d-4b9c-b61b-cd3c3b59455d/volumes" Jan 23 14:23:52 crc kubenswrapper[4775]: I0123 14:23:52.957354 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-server-0" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.227561 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-broadcaster-server-0" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.685415 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/root-account-create-update-6bcp5"] Jan 23 14:23:53 crc kubenswrapper[4775]: E0123 14:23:53.685777 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="498646da-c28d-4b9c-b61b-cd3c3b59455d" containerName="mariadb-account-create-update" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.685791 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="498646da-c28d-4b9c-b61b-cd3c3b59455d" containerName="mariadb-account-create-update" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.685986 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="498646da-c28d-4b9c-b61b-cd3c3b59455d" containerName="mariadb-account-create-update" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.686503 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.688576 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-cell1-mariadb-root-db-secret" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.700409 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-6bcp5"] Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.727059 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-db-sync-2qsr9"] Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.728510 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.737170 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.737376 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-p9s8k" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.737410 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.737625 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.752087 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2qsr9"] Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.773631 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwnwl\" (UniqueName: \"kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.773682 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm74b\" (UniqueName: \"kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.773732 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.774083 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.774147 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875056 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/rabbitmq-cell1-server-0" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875245 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwnwl\" (UniqueName: \"kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875308 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm74b\" (UniqueName: \"kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875349 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875420 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.875440 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.876105 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.881011 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.881538 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " 
pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.894003 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm74b\" (UniqueName: \"kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b\") pod \"keystone-db-sync-2qsr9\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:53 crc kubenswrapper[4775]: I0123 14:23:53.910208 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwnwl\" (UniqueName: \"kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl\") pod \"root-account-create-update-6bcp5\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.009087 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.055914 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.465855 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/root-account-create-update-6bcp5"] Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.521634 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2qsr9"] Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.953081 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2qsr9" event={"ID":"2c017749-eae9-4edd-91eb-21b25275a986","Type":"ContainerStarted","Data":"4aabf4a53eeee033e98e15a63c87d0b75cdd0f192594e5d631a3fe9af880ef88"} Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.955166 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-6bcp5" event={"ID":"ccc48032-9af5-4d79-bc89-f7d576911b23","Type":"ContainerStarted","Data":"4579a5ec0627d03f09f3dda4fc68f8fb4e44af53895a0e8c9b0a26eb695f55d2"} Jan 23 14:23:54 crc kubenswrapper[4775]: I0123 14:23:54.955214 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-6bcp5" event={"ID":"ccc48032-9af5-4d79-bc89-f7d576911b23","Type":"ContainerStarted","Data":"59ad54a1ae648ee96b97185ba8d47f1e47c69728543f57078233c5beccc4b8de"} Jan 23 14:23:55 crc kubenswrapper[4775]: E0123 14:23:55.394579 4775 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.177:46714->38.102.83.177:38819: write tcp 38.102.83.177:46714->38.102.83.177:38819: write: broken pipe Jan 23 14:23:55 crc kubenswrapper[4775]: I0123 14:23:55.966330 4775 generic.go:334] "Generic (PLEG): container finished" podID="ccc48032-9af5-4d79-bc89-f7d576911b23" containerID="4579a5ec0627d03f09f3dda4fc68f8fb4e44af53895a0e8c9b0a26eb695f55d2" exitCode=0 Jan 23 14:23:55 crc kubenswrapper[4775]: I0123 14:23:55.966375 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-6bcp5" event={"ID":"ccc48032-9af5-4d79-bc89-f7d576911b23","Type":"ContainerDied","Data":"4579a5ec0627d03f09f3dda4fc68f8fb4e44af53895a0e8c9b0a26eb695f55d2"} Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.243553 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.318484 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwnwl\" (UniqueName: \"kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl\") pod \"ccc48032-9af5-4d79-bc89-f7d576911b23\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.318679 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts\") pod \"ccc48032-9af5-4d79-bc89-f7d576911b23\" (UID: \"ccc48032-9af5-4d79-bc89-f7d576911b23\") " Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.319385 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccc48032-9af5-4d79-bc89-f7d576911b23" (UID: "ccc48032-9af5-4d79-bc89-f7d576911b23"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.323714 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl" (OuterVolumeSpecName: "kube-api-access-nwnwl") pod "ccc48032-9af5-4d79-bc89-f7d576911b23" (UID: "ccc48032-9af5-4d79-bc89-f7d576911b23"). InnerVolumeSpecName "kube-api-access-nwnwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.420992 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwnwl\" (UniqueName: \"kubernetes.io/projected/ccc48032-9af5-4d79-bc89-f7d576911b23-kube-api-access-nwnwl\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.421557 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc48032-9af5-4d79-bc89-f7d576911b23-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.978683 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/root-account-create-update-6bcp5" event={"ID":"ccc48032-9af5-4d79-bc89-f7d576911b23","Type":"ContainerDied","Data":"59ad54a1ae648ee96b97185ba8d47f1e47c69728543f57078233c5beccc4b8de"} Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.978723 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59ad54a1ae648ee96b97185ba8d47f1e47c69728543f57078233c5beccc4b8de" Jan 23 14:23:56 crc kubenswrapper[4775]: I0123 14:23:56.978799 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/root-account-create-update-6bcp5" Jan 23 14:24:01 crc kubenswrapper[4775]: I0123 14:24:01.011349 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2qsr9" event={"ID":"2c017749-eae9-4edd-91eb-21b25275a986","Type":"ContainerStarted","Data":"b4c1b23769a70549b5013f743139c0324d53830c016cc7b8320ef98ddc16b647"} Jan 23 14:24:04 crc kubenswrapper[4775]: I0123 14:24:04.045121 4775 generic.go:334] "Generic (PLEG): container finished" podID="2c017749-eae9-4edd-91eb-21b25275a986" containerID="b4c1b23769a70549b5013f743139c0324d53830c016cc7b8320ef98ddc16b647" exitCode=0 Jan 23 14:24:04 crc kubenswrapper[4775]: I0123 14:24:04.045248 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2qsr9" event={"ID":"2c017749-eae9-4edd-91eb-21b25275a986","Type":"ContainerDied","Data":"b4c1b23769a70549b5013f743139c0324d53830c016cc7b8320ef98ddc16b647"} Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.490866 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.574063 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data\") pod \"2c017749-eae9-4edd-91eb-21b25275a986\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.574217 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm74b\" (UniqueName: \"kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b\") pod \"2c017749-eae9-4edd-91eb-21b25275a986\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.574250 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle\") pod \"2c017749-eae9-4edd-91eb-21b25275a986\" (UID: \"2c017749-eae9-4edd-91eb-21b25275a986\") " Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.580977 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b" (OuterVolumeSpecName: "kube-api-access-mm74b") pod "2c017749-eae9-4edd-91eb-21b25275a986" (UID: "2c017749-eae9-4edd-91eb-21b25275a986"). InnerVolumeSpecName "kube-api-access-mm74b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.601778 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2c017749-eae9-4edd-91eb-21b25275a986" (UID: "2c017749-eae9-4edd-91eb-21b25275a986"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.619409 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data" (OuterVolumeSpecName: "config-data") pod "2c017749-eae9-4edd-91eb-21b25275a986" (UID: "2c017749-eae9-4edd-91eb-21b25275a986"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.675514 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.675553 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm74b\" (UniqueName: \"kubernetes.io/projected/2c017749-eae9-4edd-91eb-21b25275a986-kube-api-access-mm74b\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:05 crc kubenswrapper[4775]: I0123 14:24:05.675567 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2c017749-eae9-4edd-91eb-21b25275a986-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.064614 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-db-sync-2qsr9" event={"ID":"2c017749-eae9-4edd-91eb-21b25275a986","Type":"ContainerDied","Data":"4aabf4a53eeee033e98e15a63c87d0b75cdd0f192594e5d631a3fe9af880ef88"} Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.064705 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aabf4a53eeee033e98e15a63c87d0b75cdd0f192594e5d631a3fe9af880ef88" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.064646 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-db-sync-2qsr9" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.304974 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-czlxx"] Jan 23 14:24:06 crc kubenswrapper[4775]: E0123 14:24:06.305395 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc48032-9af5-4d79-bc89-f7d576911b23" containerName="mariadb-account-create-update" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.305415 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc48032-9af5-4d79-bc89-f7d576911b23" containerName="mariadb-account-create-update" Jan 23 14:24:06 crc kubenswrapper[4775]: E0123 14:24:06.305433 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c017749-eae9-4edd-91eb-21b25275a986" containerName="keystone-db-sync" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.305441 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c017749-eae9-4edd-91eb-21b25275a986" containerName="keystone-db-sync" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.305606 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc48032-9af5-4d79-bc89-f7d576911b23" containerName="mariadb-account-create-update" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.305626 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c017749-eae9-4edd-91eb-21b25275a986" containerName="keystone-db-sync" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.306244 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.311288 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.311757 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.312096 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.312375 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-p9s8k" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.312608 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.335194 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-czlxx"] Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384552 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384617 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384723 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384757 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhlw\" (UniqueName: \"kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384906 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.384959 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 
crc kubenswrapper[4775]: I0123 14:24:06.457515 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-db-sync-sgnh6"] Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.458632 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.460226 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.460402 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-nmmns" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.461269 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.475954 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-sgnh6"] Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486177 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486446 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486568 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486597 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njhlw\" (UniqueName: \"kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486739 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.486779 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.489878 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.489937 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.489949 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.494922 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.515745 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njhlw\" (UniqueName: \"kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.516613 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys\") pod \"keystone-bootstrap-czlxx\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.587633 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.587679 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.587709 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2pnr\" (UniqueName: \"kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.587874 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.587999 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.633720 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.690270 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2pnr\" (UniqueName: \"kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.691155 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.691322 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.691423 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.691449 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.691951 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.697221 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.703219 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.708450 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.715925 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2pnr\" (UniqueName: \"kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr\") pod \"placement-db-sync-sgnh6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:06 crc kubenswrapper[4775]: I0123 14:24:06.786128 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:07 crc kubenswrapper[4775]: I0123 14:24:07.105488 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-czlxx"] Jan 23 14:24:07 crc kubenswrapper[4775]: I0123 14:24:07.212482 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-db-sync-sgnh6"] Jan 23 14:24:07 crc kubenswrapper[4775]: W0123 14:24:07.234540 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc22eb7b9_6c07_4edc_a7f7_9e9c4f5acfe6.slice/crio-eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251 WatchSource:0}: Error finding container eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251: Status 404 returned error can't find the container with id eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251 Jan 23 14:24:08 crc kubenswrapper[4775]: I0123 14:24:08.087570 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-czlxx" event={"ID":"730f1c4d-a54d-4644-bf76-c3c4541e8f6d","Type":"ContainerStarted","Data":"2ee19493765c2e784fbd1d7e401c527b26da5317dbb06d292407f1d608775812"} Jan 23 14:24:08 crc kubenswrapper[4775]: I0123 14:24:08.087627 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-czlxx" event={"ID":"730f1c4d-a54d-4644-bf76-c3c4541e8f6d","Type":"ContainerStarted","Data":"bbf1ece63750a1b08bf5d8d8b6b6433c61252996ec4f59fe4318728341c380cb"} Jan 23 14:24:08 crc kubenswrapper[4775]: I0123 14:24:08.090039 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-sgnh6" event={"ID":"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6","Type":"ContainerStarted","Data":"eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251"} Jan 23 14:24:08 crc kubenswrapper[4775]: I0123 14:24:08.137091 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-czlxx" podStartSLOduration=2.137066597 podStartE2EDuration="2.137066597s" podCreationTimestamp="2026-01-23 14:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:24:08.127366504 +0000 UTC m=+1195.122195284" 
watchObservedRunningTime="2026-01-23 14:24:08.137066597 +0000 UTC m=+1195.131895347" Jan 23 14:24:10 crc kubenswrapper[4775]: I0123 14:24:10.106205 4775 generic.go:334] "Generic (PLEG): container finished" podID="730f1c4d-a54d-4644-bf76-c3c4541e8f6d" containerID="2ee19493765c2e784fbd1d7e401c527b26da5317dbb06d292407f1d608775812" exitCode=0 Jan 23 14:24:10 crc kubenswrapper[4775]: I0123 14:24:10.106404 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-czlxx" event={"ID":"730f1c4d-a54d-4644-bf76-c3c4541e8f6d","Type":"ContainerDied","Data":"2ee19493765c2e784fbd1d7e401c527b26da5317dbb06d292407f1d608775812"} Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.119637 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-sgnh6" event={"ID":"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6","Type":"ContainerStarted","Data":"29238591798a36dbd48ca4872cdddc49396b7b446c5f60340f5519ed8229bff3"} Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.144704 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-db-sync-sgnh6" podStartSLOduration=2.154391547 podStartE2EDuration="5.144678869s" podCreationTimestamp="2026-01-23 14:24:06 +0000 UTC" firstStartedPulling="2026-01-23 14:24:07.238173246 +0000 UTC m=+1194.233001986" lastFinishedPulling="2026-01-23 14:24:10.228460568 +0000 UTC m=+1197.223289308" observedRunningTime="2026-01-23 14:24:11.143776694 +0000 UTC m=+1198.138605524" watchObservedRunningTime="2026-01-23 14:24:11.144678869 +0000 UTC m=+1198.139507649" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.611584 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789093 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789141 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789207 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789229 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789286 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njhlw\" (UniqueName: \"kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: 
\"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.789324 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts\") pod \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\" (UID: \"730f1c4d-a54d-4644-bf76-c3c4541e8f6d\") " Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.796792 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.796923 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts" (OuterVolumeSpecName: "scripts") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.797883 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.798247 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw" (OuterVolumeSpecName: "kube-api-access-njhlw") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "kube-api-access-njhlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.826917 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.829487 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data" (OuterVolumeSpecName: "config-data") pod "730f1c4d-a54d-4644-bf76-c3c4541e8f6d" (UID: "730f1c4d-a54d-4644-bf76-c3c4541e8f6d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892316 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892377 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892400 4775 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892419 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892437 4775 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:11 crc kubenswrapper[4775]: I0123 14:24:11.892459 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njhlw\" (UniqueName: \"kubernetes.io/projected/730f1c4d-a54d-4644-bf76-c3c4541e8f6d-kube-api-access-njhlw\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.130920 4775 generic.go:334] "Generic (PLEG): container finished" podID="c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" containerID="29238591798a36dbd48ca4872cdddc49396b7b446c5f60340f5519ed8229bff3" exitCode=0 Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.131044 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-sgnh6" event={"ID":"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6","Type":"ContainerDied","Data":"29238591798a36dbd48ca4872cdddc49396b7b446c5f60340f5519ed8229bff3"} Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.133239 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-czlxx" event={"ID":"730f1c4d-a54d-4644-bf76-c3c4541e8f6d","Type":"ContainerDied","Data":"bbf1ece63750a1b08bf5d8d8b6b6433c61252996ec4f59fe4318728341c380cb"} Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.133279 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbf1ece63750a1b08bf5d8d8b6b6433c61252996ec4f59fe4318728341c380cb" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.133345 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-czlxx" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.791182 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-czlxx"] Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.798092 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-czlxx"] Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.893737 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6qmk5"] Jan 23 14:24:12 crc kubenswrapper[4775]: E0123 14:24:12.894384 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="730f1c4d-a54d-4644-bf76-c3c4541e8f6d" containerName="keystone-bootstrap" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.894535 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="730f1c4d-a54d-4644-bf76-c3c4541e8f6d" containerName="keystone-bootstrap" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.894922 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="730f1c4d-a54d-4644-bf76-c3c4541e8f6d" containerName="keystone-bootstrap" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.895687 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.898606 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.898748 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"osp-secret" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.900560 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.901056 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.901056 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-p9s8k" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908383 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdxrc\" (UniqueName: \"kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908448 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908533 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908632 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908677 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.908710 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:12 crc kubenswrapper[4775]: I0123 14:24:12.926657 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6qmk5"] Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.010209 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.011030 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.011137 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.011185 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.011210 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.011302 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdxrc\" (UniqueName: \"kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc\") pod \"keystone-bootstrap-6qmk5\" (UID: 
\"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.015665 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.016271 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.017475 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.018078 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.018442 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.037339 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdxrc\" (UniqueName: \"kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc\") pod \"keystone-bootstrap-6qmk5\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.238024 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.457138 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.521873 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs\") pod \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.521924 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts\") pod \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.521942 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2pnr\" (UniqueName: \"kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr\") pod \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.522658 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs" (OuterVolumeSpecName: "logs") pod "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" (UID: "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.523159 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data\") pod \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.523188 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle\") pod \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\" (UID: \"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6\") " Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.523377 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.552033 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts" (OuterVolumeSpecName: "scripts") pod "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" (UID: "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.552555 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr" (OuterVolumeSpecName: "kube-api-access-s2pnr") pod "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" (UID: "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6"). InnerVolumeSpecName "kube-api-access-s2pnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.570010 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data" (OuterVolumeSpecName: "config-data") pod "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" (UID: "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.604020 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" (UID: "c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.624749 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2pnr\" (UniqueName: \"kubernetes.io/projected/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-kube-api-access-s2pnr\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.624814 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.624826 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.624837 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.702876 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6qmk5"] Jan 23 14:24:13 crc kubenswrapper[4775]: W0123 14:24:13.710134 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5498924_f821_48fa_88a0_6d8c0c7c01de.slice/crio-cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37 WatchSource:0}: Error finding container cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37: Status 404 returned error can't find the container with id cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37 Jan 23 14:24:13 crc kubenswrapper[4775]: I0123 14:24:13.733457 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="730f1c4d-a54d-4644-bf76-c3c4541e8f6d" path="/var/lib/kubelet/pods/730f1c4d-a54d-4644-bf76-c3c4541e8f6d/volumes" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.150527 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" event={"ID":"b5498924-f821-48fa-88a0-6d8c0c7c01de","Type":"ContainerStarted","Data":"03bac1f849c95644ae09fd2e62cba3da4e7525c38066ec2837085c381ddd303a"} Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.150575 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" 
event={"ID":"b5498924-f821-48fa-88a0-6d8c0c7c01de","Type":"ContainerStarted","Data":"cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37"} Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.152952 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-db-sync-sgnh6" event={"ID":"c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6","Type":"ContainerDied","Data":"eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251"} Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.152985 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb011038cbdcefd0fd0ed9e38fe31f52d0c49640fb62d041740e20dc277e4251" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.153041 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/placement-db-sync-sgnh6" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.186237 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" podStartSLOduration=2.186216238 podStartE2EDuration="2.186216238s" podCreationTimestamp="2026-01-23 14:24:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:24:14.179609371 +0000 UTC m=+1201.174438161" watchObservedRunningTime="2026-01-23 14:24:14.186216238 +0000 UTC m=+1201.181044998" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.268291 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/placement-7787b67bb8-psq7t"] Jan 23 14:24:14 crc kubenswrapper[4775]: E0123 14:24:14.268686 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" containerName="placement-db-sync" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.268704 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" containerName="placement-db-sync" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.268906 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" containerName="placement-db-sync" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.269866 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.271697 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-scripts" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.272758 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-nmmns" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.273246 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-config-data" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.285405 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-7787b67bb8-psq7t"] Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.335447 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b653824-2e32-431a-8b16-f8687610c0fe-logs\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.335554 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-config-data\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.335589 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-combined-ca-bundle\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.335619 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-scripts\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.335657 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5j4h\" (UniqueName: \"kubernetes.io/projected/6b653824-2e32-431a-8b16-f8687610c0fe-kube-api-access-h5j4h\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437055 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b653824-2e32-431a-8b16-f8687610c0fe-logs\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437214 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-config-data\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " 
pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437264 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-combined-ca-bundle\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437305 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-scripts\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437370 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5j4h\" (UniqueName: \"kubernetes.io/projected/6b653824-2e32-431a-8b16-f8687610c0fe-kube-api-access-h5j4h\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.437873 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b653824-2e32-431a-8b16-f8687610c0fe-logs\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.444339 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-scripts\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.444587 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-combined-ca-bundle\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.445729 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b653824-2e32-431a-8b16-f8687610c0fe-config-data\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.464457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5j4h\" (UniqueName: \"kubernetes.io/projected/6b653824-2e32-431a-8b16-f8687610c0fe-kube-api-access-h5j4h\") pod \"placement-7787b67bb8-psq7t\" (UID: \"6b653824-2e32-431a-8b16-f8687610c0fe\") " pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.601126 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"placement-placement-dockercfg-nmmns" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.609184 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:14 crc kubenswrapper[4775]: I0123 14:24:14.876756 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/placement-7787b67bb8-psq7t"] Jan 23 14:24:14 crc kubenswrapper[4775]: W0123 14:24:14.885699 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b653824_2e32_431a_8b16_f8687610c0fe.slice/crio-8411a2b6a590b98245ce0251a16f853f48c80d9a0dce0115f0fe75114b189fe8 WatchSource:0}: Error finding container 8411a2b6a590b98245ce0251a16f853f48c80d9a0dce0115f0fe75114b189fe8: Status 404 returned error can't find the container with id 8411a2b6a590b98245ce0251a16f853f48c80d9a0dce0115f0fe75114b189fe8 Jan 23 14:24:15 crc kubenswrapper[4775]: I0123 14:24:15.163621 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" event={"ID":"6b653824-2e32-431a-8b16-f8687610c0fe","Type":"ContainerStarted","Data":"d66c47dd24a97c5e406bb8f8c2966868508dde888e26b413ba616252a0af9cfd"} Jan 23 14:24:15 crc kubenswrapper[4775]: I0123 14:24:15.163677 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" event={"ID":"6b653824-2e32-431a-8b16-f8687610c0fe","Type":"ContainerStarted","Data":"8411a2b6a590b98245ce0251a16f853f48c80d9a0dce0115f0fe75114b189fe8"} Jan 23 14:24:16 crc kubenswrapper[4775]: I0123 14:24:16.177177 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" event={"ID":"6b653824-2e32-431a-8b16-f8687610c0fe","Type":"ContainerStarted","Data":"bcbb533ea799345eeb794af31254e6286aa09085890e15fe08353e9133460887"} Jan 23 14:24:16 crc kubenswrapper[4775]: I0123 14:24:16.177683 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:16 crc kubenswrapper[4775]: I0123 14:24:16.177711 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:16 crc kubenswrapper[4775]: I0123 14:24:16.215635 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" podStartSLOduration=2.215601177 podStartE2EDuration="2.215601177s" podCreationTimestamp="2026-01-23 14:24:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:24:16.210207755 +0000 UTC m=+1203.205036525" watchObservedRunningTime="2026-01-23 14:24:16.215601177 +0000 UTC m=+1203.210429957" Jan 23 14:24:17 crc kubenswrapper[4775]: I0123 14:24:17.185020 4775 generic.go:334] "Generic (PLEG): container finished" podID="b5498924-f821-48fa-88a0-6d8c0c7c01de" containerID="03bac1f849c95644ae09fd2e62cba3da4e7525c38066ec2837085c381ddd303a" exitCode=0 Jan 23 14:24:17 crc kubenswrapper[4775]: I0123 14:24:17.185110 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" event={"ID":"b5498924-f821-48fa-88a0-6d8c0c7c01de","Type":"ContainerDied","Data":"03bac1f849c95644ae09fd2e62cba3da4e7525c38066ec2837085c381ddd303a"} Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.555283 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.619622 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdxrc\" (UniqueName: \"kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.620158 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.620365 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.620691 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.621059 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.621361 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys\") pod \"b5498924-f821-48fa-88a0-6d8c0c7c01de\" (UID: \"b5498924-f821-48fa-88a0-6d8c0c7c01de\") " Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.626221 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc" (OuterVolumeSpecName: "kube-api-access-bdxrc") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "kube-api-access-bdxrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.626218 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts" (OuterVolumeSpecName: "scripts") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.626700 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.628306 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.646031 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.646786 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data" (OuterVolumeSpecName: "config-data") pod "b5498924-f821-48fa-88a0-6d8c0c7c01de" (UID: "b5498924-f821-48fa-88a0-6d8c0c7c01de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724057 4775 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724116 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdxrc\" (UniqueName: \"kubernetes.io/projected/b5498924-f821-48fa-88a0-6d8c0c7c01de-kube-api-access-bdxrc\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724140 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724158 4775 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724175 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:18 crc kubenswrapper[4775]: I0123 14:24:18.724193 4775 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5498924-f821-48fa-88a0-6d8c0c7c01de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.208767 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" event={"ID":"b5498924-f821-48fa-88a0-6d8c0c7c01de","Type":"ContainerDied","Data":"cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37"} Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.208855 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd52832bbef8dab9e3735f1d292a892ae5b426c73ea12a7d73386e0f32a43d37" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.208978 4775 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-bootstrap-6qmk5" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.439924 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/keystone-7d978f-gdlmv"] Jan 23 14:24:19 crc kubenswrapper[4775]: E0123 14:24:19.440599 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5498924-f821-48fa-88a0-6d8c0c7c01de" containerName="keystone-bootstrap" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.440645 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5498924-f821-48fa-88a0-6d8c0c7c01de" containerName="keystone-bootstrap" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.441093 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5498924-f821-48fa-88a0-6d8c0c7c01de" containerName="keystone-bootstrap" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.442211 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.446503 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-config-data" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.446848 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-keystone-dockercfg-p9s8k" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.447284 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone-scripts" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.451114 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-7d978f-gdlmv"] Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.454059 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"keystone" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.537877 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-credential-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.538294 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-fernet-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.538482 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-config-data\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.538739 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-combined-ca-bundle\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 
14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.538971 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87qr\" (UniqueName: \"kubernetes.io/projected/898c8554-82c6-4777-8869-15981e356a84-kube-api-access-w87qr\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.539143 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-scripts\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640554 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-combined-ca-bundle\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640666 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w87qr\" (UniqueName: \"kubernetes.io/projected/898c8554-82c6-4777-8869-15981e356a84-kube-api-access-w87qr\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640728 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-scripts\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640826 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-credential-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640897 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-fernet-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.640940 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-config-data\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.645212 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-scripts\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.646030 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-credential-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.646222 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-fernet-keys\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.648104 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-combined-ca-bundle\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.652437 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/898c8554-82c6-4777-8869-15981e356a84-config-data\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.667556 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w87qr\" (UniqueName: \"kubernetes.io/projected/898c8554-82c6-4777-8869-15981e356a84-kube-api-access-w87qr\") pod \"keystone-7d978f-gdlmv\" (UID: \"898c8554-82c6-4777-8869-15981e356a84\") " pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:19 crc kubenswrapper[4775]: I0123 14:24:19.769446 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:20 crc kubenswrapper[4775]: I0123 14:24:20.273870 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/keystone-7d978f-gdlmv"] Jan 23 14:24:21 crc kubenswrapper[4775]: I0123 14:24:21.228061 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-7d978f-gdlmv" event={"ID":"898c8554-82c6-4777-8869-15981e356a84","Type":"ContainerStarted","Data":"250fdd231ab968eff2e27c95649eb833386f8def0f65064953417f57128c73ed"} Jan 23 14:24:21 crc kubenswrapper[4775]: I0123 14:24:21.228746 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/keystone-7d978f-gdlmv" event={"ID":"898c8554-82c6-4777-8869-15981e356a84","Type":"ContainerStarted","Data":"94dc95cb3a14628d6cfd18a608edddc1e814760956d313fb91c94f21cda39255"} Jan 23 14:24:21 crc kubenswrapper[4775]: I0123 14:24:21.230114 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:21 crc kubenswrapper[4775]: I0123 14:24:21.268516 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/keystone-7d978f-gdlmv" podStartSLOduration=2.268493288 podStartE2EDuration="2.268493288s" podCreationTimestamp="2026-01-23 14:24:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:24:21.261525931 +0000 UTC m=+1208.256354681" watchObservedRunningTime="2026-01-23 14:24:21.268493288 +0000 UTC m=+1208.263322058" Jan 23 14:24:45 crc kubenswrapper[4775]: I0123 14:24:45.699155 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:45 crc kubenswrapper[4775]: I0123 14:24:45.707735 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/placement-7787b67bb8-psq7t" Jan 23 14:24:51 crc kubenswrapper[4775]: I0123 14:24:51.342867 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/keystone-7d978f-gdlmv" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.459331 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.462066 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.467378 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstackclient-openstackclient-dockercfg-tkffp" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.468118 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"nova-kuttl-default"/"openstack-config" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.468350 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"openstack-config-secret" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.469962 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.657265 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-combined-ca-bundle\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.657403 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.657622 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hhxk\" (UniqueName: \"kubernetes.io/projected/76733f2d-491c-45dd-bcf5-1a4423019717-kube-api-access-6hhxk\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.657679 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config-secret\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.759227 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.759340 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hhxk\" (UniqueName: \"kubernetes.io/projected/76733f2d-491c-45dd-bcf5-1a4423019717-kube-api-access-6hhxk\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.759378 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config-secret\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc 
kubenswrapper[4775]: I0123 14:24:53.759438 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-combined-ca-bundle\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.761342 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.769372 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-openstack-config-secret\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.769394 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76733f2d-491c-45dd-bcf5-1a4423019717-combined-ca-bundle\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.789501 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hhxk\" (UniqueName: \"kubernetes.io/projected/76733f2d-491c-45dd-bcf5-1a4423019717-kube-api-access-6hhxk\") pod \"openstackclient\" (UID: \"76733f2d-491c-45dd-bcf5-1a4423019717\") " pod="nova-kuttl-default/openstackclient" Jan 23 14:24:53 crc kubenswrapper[4775]: I0123 14:24:53.802579 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/openstackclient" Jan 23 14:24:54 crc kubenswrapper[4775]: I0123 14:24:54.078496 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/openstackclient"] Jan 23 14:24:54 crc kubenswrapper[4775]: I0123 14:24:54.572484 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"76733f2d-491c-45dd-bcf5-1a4423019717","Type":"ContainerStarted","Data":"f85a7383fa7a8273d0fb0dbacfd2e742a75b8cbb16bb2e5e6028fd2297c0d9af"} Jan 23 14:25:02 crc kubenswrapper[4775]: I0123 14:25:02.639117 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/openstackclient" event={"ID":"76733f2d-491c-45dd-bcf5-1a4423019717","Type":"ContainerStarted","Data":"489441ff2ea4269ef000c88513b580c4205fc44985bfbde6f23c1ce7ded2b2ea"} Jan 23 14:25:02 crc kubenswrapper[4775]: I0123 14:25:02.672747 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/openstackclient" podStartSLOduration=1.653875485 podStartE2EDuration="9.672719149s" podCreationTimestamp="2026-01-23 14:24:53 +0000 UTC" firstStartedPulling="2026-01-23 14:24:54.082849957 +0000 UTC m=+1241.077678707" lastFinishedPulling="2026-01-23 14:25:02.101693591 +0000 UTC m=+1249.096522371" observedRunningTime="2026-01-23 14:25:02.662064298 +0000 UTC m=+1249.656893078" watchObservedRunningTime="2026-01-23 14:25:02.672719149 +0000 UTC m=+1249.667547929" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.061801 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.062939 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" containerName="manager" containerID="cri-o://e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774" gracePeriod=10 Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.107595 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.107847 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" podUID="355da547-d965-4754-8730-b9c8a20fd930" containerName="operator" containerID="cri-o://29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d" gracePeriod=10 Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.403359 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.404486 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.445298 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.518049 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-index-dockercfg-2sllt" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.535901 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2fd2\" (UniqueName: \"kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2\") pod \"nova-operator-index-xx8wj\" (UID: \"552805f7-e5f6-447b-a319-a3e3d62608f3\") " pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.589177 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.637615 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2fd2\" (UniqueName: \"kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2\") pod \"nova-operator-index-xx8wj\" (UID: \"552805f7-e5f6-447b-a319-a3e3d62608f3\") " pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.662429 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.692856 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2fd2\" (UniqueName: \"kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2\") pod \"nova-operator-index-xx8wj\" (UID: \"552805f7-e5f6-447b-a319-a3e3d62608f3\") " pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.738814 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnvz4\" (UniqueName: \"kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4\") pod \"355da547-d965-4754-8730-b9c8a20fd930\" (UID: \"355da547-d965-4754-8730-b9c8a20fd930\") " Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.738859 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gskfg\" (UniqueName: \"kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg\") pod \"9bad88d6-5ca9-4176-904d-72b793e1361e\" (UID: \"9bad88d6-5ca9-4176-904d-72b793e1361e\") " Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.742141 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4" (OuterVolumeSpecName: "kube-api-access-qnvz4") pod "355da547-d965-4754-8730-b9c8a20fd930" (UID: "355da547-d965-4754-8730-b9c8a20fd930"). InnerVolumeSpecName "kube-api-access-qnvz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.742204 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg" (OuterVolumeSpecName: "kube-api-access-gskfg") pod "9bad88d6-5ca9-4176-904d-72b793e1361e" (UID: "9bad88d6-5ca9-4176-904d-72b793e1361e"). InnerVolumeSpecName "kube-api-access-gskfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.786621 4775 generic.go:334] "Generic (PLEG): container finished" podID="9bad88d6-5ca9-4176-904d-72b793e1361e" containerID="e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774" exitCode=0 Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.786683 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" event={"ID":"9bad88d6-5ca9-4176-904d-72b793e1361e","Type":"ContainerDied","Data":"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774"} Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.786707 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" event={"ID":"9bad88d6-5ca9-4176-904d-72b793e1361e","Type":"ContainerDied","Data":"3b31a7012ea48421023dcf9b284625ce3e8507aa2773ce103b29a5ca80ded146"} Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.786723 4775 scope.go:117] "RemoveContainer" containerID="e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.786860 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-d9495b985-k98mk" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.793606 4775 generic.go:334] "Generic (PLEG): container finished" podID="355da547-d965-4754-8730-b9c8a20fd930" containerID="29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d" exitCode=0 Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.793641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" event={"ID":"355da547-d965-4754-8730-b9c8a20fd930","Type":"ContainerDied","Data":"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d"} Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.793726 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" event={"ID":"355da547-d965-4754-8730-b9c8a20fd930","Type":"ContainerDied","Data":"9226d2ede7beb9208ad931c1d54e8ae0eea8cc9501e5c82efcf4ccfa1586382e"} Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.793815 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.816783 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.824864 4775 scope.go:117] "RemoveContainer" containerID="e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.825135 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-controller-manager-d9495b985-k98mk"] Jan 23 14:25:16 crc kubenswrapper[4775]: E0123 14:25:16.825334 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774\": container with ID starting with e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774 not found: ID does not exist" containerID="e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.825368 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774"} err="failed to get container status \"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774\": rpc error: code = NotFound desc = could not find container \"e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774\": container with ID starting with e73b6eeb014674539aea8fd7195079debeadbaa135e4e4e1baacaed853f9a774 not found: ID does not exist" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.825395 4775 scope.go:117] "RemoveContainer" containerID="29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.830376 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.835021 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86f7b68b5c-stl6w"] Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.840355 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnvz4\" (UniqueName: \"kubernetes.io/projected/355da547-d965-4754-8730-b9c8a20fd930-kube-api-access-qnvz4\") on node \"crc\" DevicePath \"\"" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.840394 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gskfg\" (UniqueName: \"kubernetes.io/projected/9bad88d6-5ca9-4176-904d-72b793e1361e-kube-api-access-gskfg\") on node \"crc\" DevicePath \"\"" Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.843753 4775 scope.go:117] "RemoveContainer" containerID="29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d" Jan 23 14:25:16 crc kubenswrapper[4775]: E0123 14:25:16.844175 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d\": container with ID starting with 29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d not found: ID does not exist" containerID="29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d" Jan 23 14:25:16 crc kubenswrapper[4775]: 
I0123 14:25:16.844202 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d"} err="failed to get container status \"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d\": rpc error: code = NotFound desc = could not find container \"29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d\": container with ID starting with 29bef9650740f55bafd48157808e3591f52eafd13be1ee85e76f5102a8d9c94d not found: ID does not exist"
Jan 23 14:25:16 crc kubenswrapper[4775]: I0123 14:25:16.845777 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-xx8wj"
Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.280368 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"]
Jan 23 14:25:17 crc kubenswrapper[4775]: W0123 14:25:17.288867 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod552805f7_e5f6_447b_a319_a3e3d62608f3.slice/crio-31cb15ec06acc02d60feb50c3051c3474687b3e8e841a6f7053b73627327039b WatchSource:0}: Error finding container 31cb15ec06acc02d60feb50c3051c3474687b3e8e841a6f7053b73627327039b: Status 404 returned error can't find the container with id 31cb15ec06acc02d60feb50c3051c3474687b3e8e841a6f7053b73627327039b
Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.725135 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355da547-d965-4754-8730-b9c8a20fd930" path="/var/lib/kubelet/pods/355da547-d965-4754-8730-b9c8a20fd930/volumes"
Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.725897 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" path="/var/lib/kubelet/pods/9bad88d6-5ca9-4176-904d-72b793e1361e/volumes"
Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.811900 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xx8wj" event={"ID":"552805f7-e5f6-447b-a319-a3e3d62608f3","Type":"ContainerStarted","Data":"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"}
Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.811952 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xx8wj" event={"ID":"552805f7-e5f6-447b-a319-a3e3d62608f3","Type":"ContainerStarted","Data":"31cb15ec06acc02d60feb50c3051c3474687b3e8e841a6f7053b73627327039b"}
Jan 23 14:25:17 crc kubenswrapper[4775]: I0123 14:25:17.843702 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-xx8wj" podStartSLOduration=1.685060688 podStartE2EDuration="1.843678626s" podCreationTimestamp="2026-01-23 14:25:16 +0000 UTC" firstStartedPulling="2026-01-23 14:25:17.290689517 +0000 UTC m=+1264.285518257" lastFinishedPulling="2026-01-23 14:25:17.449307415 +0000 UTC m=+1264.444136195" observedRunningTime="2026-01-23 14:25:17.8381586 +0000 UTC m=+1264.832987340" watchObservedRunningTime="2026-01-23 14:25:17.843678626 +0000 UTC m=+1264.838507376"
Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.142191 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"]
Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.564220 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-index-x4gqk"]
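
The NotFound pair above (for container 29bef965...) and the identical pair a few lines earlier (for e73b6eeb...) look like an ordering race during pod deletion rather than a failure: the runtime had already removed the container when scope.go retried RemoveContainer, so CRI-O answers rpc code = NotFound and the kubelet logs "DeleteContainer returned error". A rough way to separate this race from a genuine deletion problem is to confirm that every NotFound container ID was previously reported ContainerDied; a Python sketch assuming the same journal text format as above:

    import re

    DIED     = re.compile(r'"ContainerDied","Data":"([0-9a-f]{64})"')
    NOTFOUND = re.compile(r'could not find container \\"([0-9a-f]{64})\\"')

    def racy_removals(text):
        # True means the ID died first, i.e. the NotFound is the benign race.
        died = set(DIED.findall(text))
        return {cid: cid in died for cid in set(NOTFOUND.findall(text))}

In this excerpt every NotFound ID (e73b6eeb..., 29bef965..., and later 6b90c58f...) maps to True: each had a ContainerDied event logged before its removal was retried.
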
pods=["openstack-operators/nova-operator-index-x4gqk"] Jan 23 14:25:19 crc kubenswrapper[4775]: E0123 14:25:19.564868 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355da547-d965-4754-8730-b9c8a20fd930" containerName="operator" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.564899 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="355da547-d965-4754-8730-b9c8a20fd930" containerName="operator" Jan 23 14:25:19 crc kubenswrapper[4775]: E0123 14:25:19.564928 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" containerName="manager" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.564967 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" containerName="manager" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.565422 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="355da547-d965-4754-8730-b9c8a20fd930" containerName="operator" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.565460 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bad88d6-5ca9-4176-904d-72b793e1361e" containerName="manager" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.566520 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-x4gqk" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.596628 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-x4gqk"] Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.685020 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdkvs\" (UniqueName: \"kubernetes.io/projected/78f375c8-5d62-4cbb-b348-8205d476d603-kube-api-access-xdkvs\") pod \"nova-operator-index-x4gqk\" (UID: \"78f375c8-5d62-4cbb-b348-8205d476d603\") " pod="openstack-operators/nova-operator-index-x4gqk" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.787602 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdkvs\" (UniqueName: \"kubernetes.io/projected/78f375c8-5d62-4cbb-b348-8205d476d603-kube-api-access-xdkvs\") pod \"nova-operator-index-x4gqk\" (UID: \"78f375c8-5d62-4cbb-b348-8205d476d603\") " pod="openstack-operators/nova-operator-index-x4gqk" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.815478 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdkvs\" (UniqueName: \"kubernetes.io/projected/78f375c8-5d62-4cbb-b348-8205d476d603-kube-api-access-xdkvs\") pod \"nova-operator-index-x4gqk\" (UID: \"78f375c8-5d62-4cbb-b348-8205d476d603\") " pod="openstack-operators/nova-operator-index-x4gqk" Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.835357 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/nova-operator-index-xx8wj" podUID="552805f7-e5f6-447b-a319-a3e3d62608f3" containerName="registry-server" containerID="cri-o://6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0" gracePeriod=2 Jan 23 14:25:19 crc kubenswrapper[4775]: I0123 14:25:19.892736 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-index-x4gqk" Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.244079 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-xx8wj" Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.396570 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2fd2\" (UniqueName: \"kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2\") pod \"552805f7-e5f6-447b-a319-a3e3d62608f3\" (UID: \"552805f7-e5f6-447b-a319-a3e3d62608f3\") " Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.405027 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2" (OuterVolumeSpecName: "kube-api-access-f2fd2") pod "552805f7-e5f6-447b-a319-a3e3d62608f3" (UID: "552805f7-e5f6-447b-a319-a3e3d62608f3"). InnerVolumeSpecName "kube-api-access-f2fd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.441070 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-index-x4gqk"] Jan 23 14:25:20 crc kubenswrapper[4775]: W0123 14:25:20.447628 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78f375c8_5d62_4cbb_b348_8205d476d603.slice/crio-968c26b22bd908cb1e261c87b6923181069238713e5e8019feeaa47f8ae7988f WatchSource:0}: Error finding container 968c26b22bd908cb1e261c87b6923181069238713e5e8019feeaa47f8ae7988f: Status 404 returned error can't find the container with id 968c26b22bd908cb1e261c87b6923181069238713e5e8019feeaa47f8ae7988f Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.499234 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2fd2\" (UniqueName: \"kubernetes.io/projected/552805f7-e5f6-447b-a319-a3e3d62608f3-kube-api-access-f2fd2\") on node \"crc\" DevicePath \"\"" Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.846120 4775 generic.go:334] "Generic (PLEG): container finished" podID="552805f7-e5f6-447b-a319-a3e3d62608f3" containerID="6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0" exitCode=0 Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.846242 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-index-xx8wj"
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.846281 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xx8wj" event={"ID":"552805f7-e5f6-447b-a319-a3e3d62608f3","Type":"ContainerDied","Data":"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"}
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.847905 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-xx8wj" event={"ID":"552805f7-e5f6-447b-a319-a3e3d62608f3","Type":"ContainerDied","Data":"31cb15ec06acc02d60feb50c3051c3474687b3e8e841a6f7053b73627327039b"}
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.847940 4775 scope.go:117] "RemoveContainer" containerID="6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.850013 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-x4gqk" event={"ID":"78f375c8-5d62-4cbb-b348-8205d476d603","Type":"ContainerStarted","Data":"cbccf2dbb603d4f8c6c8b3929f8ded1dfcb1ccd264450f78faa8ec3434116628"}
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.850039 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-index-x4gqk" event={"ID":"78f375c8-5d62-4cbb-b348-8205d476d603","Type":"ContainerStarted","Data":"968c26b22bd908cb1e261c87b6923181069238713e5e8019feeaa47f8ae7988f"}
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.888751 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-index-x4gqk" podStartSLOduration=1.8153065819999998 podStartE2EDuration="1.888722714s" podCreationTimestamp="2026-01-23 14:25:19 +0000 UTC" firstStartedPulling="2026-01-23 14:25:20.45212413 +0000 UTC m=+1267.446952910" lastFinishedPulling="2026-01-23 14:25:20.525540262 +0000 UTC m=+1267.520369042" observedRunningTime="2026-01-23 14:25:20.883509406 +0000 UTC m=+1267.878338166" watchObservedRunningTime="2026-01-23 14:25:20.888722714 +0000 UTC m=+1267.883551484"
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.893676 4775 scope.go:117] "RemoveContainer" containerID="6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"
Jan 23 14:25:20 crc kubenswrapper[4775]: E0123 14:25:20.894109 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0\": container with ID starting with 6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0 not found: ID does not exist" containerID="6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.894148 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0"} err="failed to get container status \"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0\": rpc error: code = NotFound desc = could not find container \"6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0\": container with ID starting with 6b90c58f739ea4e1c7fc4223da3095218fda2c74a9cf6d304e75fd96ddcf88d0 not found: ID does not exist"
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.908107 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"]
Jan 23 14:25:20 crc kubenswrapper[4775]: I0123 14:25:20.920139 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/nova-operator-index-xx8wj"]
Jan 23 14:25:21 crc kubenswrapper[4775]: I0123 14:25:21.721041 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="552805f7-e5f6-447b-a319-a3e3d62608f3" path="/var/lib/kubelet/pods/552805f7-e5f6-447b-a319-a3e3d62608f3/volumes"
Jan 23 14:25:23 crc kubenswrapper[4775]: I0123 14:25:23.219462 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:25:23 crc kubenswrapper[4775]: I0123 14:25:23.219929 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:25:29 crc kubenswrapper[4775]: I0123 14:25:29.893347 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-index-x4gqk"
Jan 23 14:25:29 crc kubenswrapper[4775]: I0123 14:25:29.894031 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/nova-operator-index-x4gqk"
Jan 23 14:25:29 crc kubenswrapper[4775]: I0123 14:25:29.928307 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/nova-operator-index-x4gqk"
Jan 23 14:25:29 crc kubenswrapper[4775]: I0123 14:25:29.991275 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-index-x4gqk"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.411107 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"]
Jan 23 14:25:38 crc kubenswrapper[4775]: E0123 14:25:38.411972 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="552805f7-e5f6-447b-a319-a3e3d62608f3" containerName="registry-server"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.411988 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="552805f7-e5f6-447b-a319-a3e3d62608f3" containerName="registry-server"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.412183 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="552805f7-e5f6-447b-a319-a3e3d62608f3" containerName="registry-server"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.413503 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.421614 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nklzs"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.450629 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"]
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.512073 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.512401 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.512658 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfxq4\" (UniqueName: \"kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.614188 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfxq4\" (UniqueName: \"kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.614281 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.614303 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.614814 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.615171 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.637107 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfxq4\" (UniqueName: \"kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4\") pod \"5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") " pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:38 crc kubenswrapper[4775]: I0123 14:25:38.748092 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:39 crc kubenswrapper[4775]: I0123 14:25:39.217446 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"]
Jan 23 14:25:39 crc kubenswrapper[4775]: W0123 14:25:39.227047 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7025f67_434a_4dba_9b3a_e3b809f5c614.slice/crio-c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd WatchSource:0}: Error finding container c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd: Status 404 returned error can't find the container with id c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd
Jan 23 14:25:40 crc kubenswrapper[4775]: I0123 14:25:40.048687 4775 generic.go:334] "Generic (PLEG): container finished" podID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerID="739e20da7cc2d594cd007d49d1cb4d46d86d97e2f87bb0cc8db7e7ba0f7c49e2" exitCode=0
Jan 23 14:25:40 crc kubenswrapper[4775]: I0123 14:25:40.048758 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc" event={"ID":"a7025f67-434a-4dba-9b3a-e3b809f5c614","Type":"ContainerDied","Data":"739e20da7cc2d594cd007d49d1cb4d46d86d97e2f87bb0cc8db7e7ba0f7c49e2"}
Jan 23 14:25:40 crc kubenswrapper[4775]: I0123 14:25:40.049184 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc" event={"ID":"a7025f67-434a-4dba-9b3a-e3b809f5c614","Type":"ContainerStarted","Data":"c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd"}
Jan 23 14:25:41 crc kubenswrapper[4775]: I0123 14:25:41.061209 4775 generic.go:334] "Generic (PLEG): container finished" podID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerID="81145fa9cc8e22af9d5f3739f292c51f9e7e1303411fc02184f15488fcaee2bc" exitCode=0
Jan 23 14:25:41 crc kubenswrapper[4775]: I0123 14:25:41.061371 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc" event={"ID":"a7025f67-434a-4dba-9b3a-e3b809f5c614","Type":"ContainerDied","Data":"81145fa9cc8e22af9d5f3739f292c51f9e7e1303411fc02184f15488fcaee2bc"}
Jan 23 14:25:42 crc kubenswrapper[4775]: I0123 14:25:42.076290 4775 generic.go:334] "Generic (PLEG): container finished" podID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerID="e1c8ccb7e0efad01a74a7bcb2e81ffe8f5651380b879f58fb9e879f6851a180a" exitCode=0
Jan 23 14:25:42 crc kubenswrapper[4775]: I0123 14:25:42.076408 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc" event={"ID":"a7025f67-434a-4dba-9b3a-e3b809f5c614","Type":"ContainerDied","Data":"e1c8ccb7e0efad01a74a7bcb2e81ffe8f5651380b879f58fb9e879f6851a180a"}
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.514329 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.557866 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfxq4\" (UniqueName: \"kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4\") pod \"a7025f67-434a-4dba-9b3a-e3b809f5c614\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") "
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.558280 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle\") pod \"a7025f67-434a-4dba-9b3a-e3b809f5c614\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") "
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.558342 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util\") pod \"a7025f67-434a-4dba-9b3a-e3b809f5c614\" (UID: \"a7025f67-434a-4dba-9b3a-e3b809f5c614\") "
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.561096 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle" (OuterVolumeSpecName: "bundle") pod "a7025f67-434a-4dba-9b3a-e3b809f5c614" (UID: "a7025f67-434a-4dba-9b3a-e3b809f5c614"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.566949 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4" (OuterVolumeSpecName: "kube-api-access-zfxq4") pod "a7025f67-434a-4dba-9b3a-e3b809f5c614" (UID: "a7025f67-434a-4dba-9b3a-e3b809f5c614"). InnerVolumeSpecName "kube-api-access-zfxq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.572947 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util" (OuterVolumeSpecName: "util") pod "a7025f67-434a-4dba-9b3a-e3b809f5c614" (UID: "a7025f67-434a-4dba-9b3a-e3b809f5c614"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.661039 4775 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-util\") on node \"crc\" DevicePath \"\""
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.661109 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfxq4\" (UniqueName: \"kubernetes.io/projected/a7025f67-434a-4dba-9b3a-e3b809f5c614-kube-api-access-zfxq4\") on node \"crc\" DevicePath \"\""
Jan 23 14:25:43 crc kubenswrapper[4775]: I0123 14:25:43.661140 4775 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a7025f67-434a-4dba-9b3a-e3b809f5c614-bundle\") on node \"crc\" DevicePath \"\""
Jan 23 14:25:44 crc kubenswrapper[4775]: I0123 14:25:44.103793 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc" event={"ID":"a7025f67-434a-4dba-9b3a-e3b809f5c614","Type":"ContainerDied","Data":"c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd"}
Jan 23 14:25:44 crc kubenswrapper[4775]: I0123 14:25:44.103849 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c132cdd71e18904ddaab66994c62997ef1496ddd868c2b3c599059668d98a2dd"
Jan 23 14:25:44 crc kubenswrapper[4775]: I0123 14:25:44.103959 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.590962 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"]
Jan 23 14:25:48 crc kubenswrapper[4775]: E0123 14:25:48.592624 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="pull"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.592712 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="pull"
Jan 23 14:25:48 crc kubenswrapper[4775]: E0123 14:25:48.592783 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="extract"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.592863 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="extract"
Jan 23 14:25:48 crc kubenswrapper[4775]: E0123 14:25:48.592924 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="util"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.592981 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="util"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.593197 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7025f67-434a-4dba-9b3a-e3b809f5c614" containerName="extract"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.593727 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.595695 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-service-cert"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.610387 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"]
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.612221 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mh2wz"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.646149 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-apiservice-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.646250 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-webhook-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.646318 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6q45\" (UniqueName: \"kubernetes.io/projected/92377252-2e4d-48bb-95ea-724a4ff5c788-kube-api-access-j6q45\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.748195 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-webhook-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.748325 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6q45\" (UniqueName: \"kubernetes.io/projected/92377252-2e4d-48bb-95ea-724a4ff5c788-kube-api-access-j6q45\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.748455 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-apiservice-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.755442 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-apiservice-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.757785 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/92377252-2e4d-48bb-95ea-724a4ff5c788-webhook-cert\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.762720 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6q45\" (UniqueName: \"kubernetes.io/projected/92377252-2e4d-48bb-95ea-724a4ff5c788-kube-api-access-j6q45\") pod \"nova-operator-controller-manager-7c5fcc4cc6-wwr78\" (UID: \"92377252-2e4d-48bb-95ea-724a4ff5c788\") " pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:48 crc kubenswrapper[4775]: I0123 14:25:48.911309 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:49 crc kubenswrapper[4775]: I0123 14:25:49.490558 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"]
Jan 23 14:25:49 crc kubenswrapper[4775]: W0123 14:25:49.504265 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92377252_2e4d_48bb_95ea_724a4ff5c788.slice/crio-0d0cf92a50069b2429dbd3e094c5d24c5436675eaaaf4fb44d483301c4dbf620 WatchSource:0}: Error finding container 0d0cf92a50069b2429dbd3e094c5d24c5436675eaaaf4fb44d483301c4dbf620: Status 404 returned error can't find the container with id 0d0cf92a50069b2429dbd3e094c5d24c5436675eaaaf4fb44d483301c4dbf620
Jan 23 14:25:50 crc kubenswrapper[4775]: I0123 14:25:50.169677 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78" event={"ID":"92377252-2e4d-48bb-95ea-724a4ff5c788","Type":"ContainerStarted","Data":"ca2136b21ddc8d912619d58ffef5ca99beab2de7fc777ad707902d08a38fd5cb"}
Jan 23 14:25:50 crc kubenswrapper[4775]: I0123 14:25:50.169976 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78" event={"ID":"92377252-2e4d-48bb-95ea-724a4ff5c788","Type":"ContainerStarted","Data":"0d0cf92a50069b2429dbd3e094c5d24c5436675eaaaf4fb44d483301c4dbf620"}
Jan 23 14:25:50 crc kubenswrapper[4775]: I0123 14:25:50.170121 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:25:50 crc kubenswrapper[4775]: I0123 14:25:50.211537 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78" podStartSLOduration=2.211510081 podStartE2EDuration="2.211510081s" podCreationTimestamp="2026-01-23 14:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:25:50.203778027 +0000 UTC m=+1297.198606807" watchObservedRunningTime="2026-01-23 14:25:50.211510081 +0000 UTC m=+1297.206338831"
Jan 23 14:25:53 crc kubenswrapper[4775]: I0123 14:25:53.218517 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:25:53 crc kubenswrapper[4775]: I0123 14:25:53.218884 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:25:58 crc kubenswrapper[4775]: I0123 14:25:58.917557 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7c5fcc4cc6-wwr78"
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.219592 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.220111 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.220155 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.220748 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.220798 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596" gracePeriod=600
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.444154 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596" exitCode=0
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.444206 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596"}
Jan 23 14:26:23 crc kubenswrapper[4775]: I0123 14:26:23.444245 4775 scope.go:117] "RemoveContainer" containerID="04aeabd8c4a1cb3e5fe85b5d65d741e8a1d8f8a6f9824c7a0b310cfc24829df1"
Jan 23 14:26:24 crc kubenswrapper[4775]: I0123 14:26:24.458764 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"}
Jan 23 14:26:24 crc kubenswrapper[4775]: I0123 14:26:24.846692 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-4dbx9"]
Jan 23 14:26:24 crc kubenswrapper[4775]: I0123 14:26:24.847604 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:24 crc kubenswrapper[4775]: I0123 14:26:24.870358 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4dbx9"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.015731 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zczcx\" (UniqueName: \"kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.016198 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.046279 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nvvdc"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.047296 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.053421 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.054590 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.059302 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.071139 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nvvdc"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.084395 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.117686 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zczcx\" (UniqueName: \"kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.117746 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.118485 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.137997 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-q4r8h"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.139186 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.146727 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-q4r8h"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.150112 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zczcx\" (UniqueName: \"kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx\") pod \"nova-api-db-create-4dbx9\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") " pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.172417 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.219736 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrhr\" (UniqueName: \"kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.219790 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.219869 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.219892 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmgkv\" (UniqueName: \"kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.247250 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.248204 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.253759 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.259662 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.321614 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrhr\" (UniqueName: \"kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.321918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.321951 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc8t4\" (UniqueName: \"kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.321980 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.322009 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmgkv\" (UniqueName: \"kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.322032 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.322780 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.322900 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.337794 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrhr\" (UniqueName: \"kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr\") pod \"nova-cell0-db-create-nvvdc\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") " pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.341364 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmgkv\" (UniqueName: \"kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv\") pod \"nova-api-74fa-account-create-update-r8n42\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") " pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.368467 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.376357 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.423711 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc8t4\" (UniqueName: \"kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.423771 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6k7b\" (UniqueName: \"kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.423820 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.423871 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.424699 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.442704 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.443598 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.447168 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.450093 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc8t4\" (UniqueName: \"kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4\") pod \"nova-cell1-db-create-q4r8h\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") " pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.458117 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.493048 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.525783 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.525885 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6k7b\" (UniqueName: \"kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.525906 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.525925 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld67n\" (UniqueName: \"kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.526857 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.551458 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6k7b\" (UniqueName: \"kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b\") pod \"nova-cell0-dec4-account-create-update-thscn\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") " pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.586670 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.614882 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4dbx9"]
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.627961 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.628176 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld67n\" (UniqueName: \"kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.628705 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: W0123 14:26:25.628725 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdce4e03_ab75_4cf0_ae3c_8a9fff7ee6ff.slice/crio-6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01 WatchSource:0}: Error finding container 6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01: Status 404 returned error can't find the container with id 6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.646835 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld67n\" (UniqueName: \"kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n\") pod \"nova-cell1-fcdd-account-create-update-58ttw\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.766615 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.864154 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"]
Jan 23 14:26:25 crc kubenswrapper[4775]: W0123 14:26:25.868685 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ed8da8c_1d52_44a3_b1c8_b68000003d91.slice/crio-ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58 WatchSource:0}: Error finding container ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58: Status 404 returned error can't find the container with id ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58
Jan 23 14:26:25 crc kubenswrapper[4775]: I0123 14:26:25.879790 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nvvdc"]
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.093543 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-q4r8h"]
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.117650 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"]
Jan 23 14:26:26 crc kubenswrapper[4775]: W0123 14:26:26.123668 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcce1ea66_c6e5_41e7_b0fc_f915fab736f9.slice/crio-b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9 WatchSource:0}: Error finding container b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9: Status 404 returned error can't find the container with id b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.240189 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"]
Jan 23 14:26:26 crc kubenswrapper[4775]: W0123 14:26:26.244217 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5980f4a0_814a_4f66_b637_80071a62061b.slice/crio-d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303 WatchSource:0}: Error finding container d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303: Status 404 returned error can't find the container with id d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.484590 4775 generic.go:334] "Generic (PLEG): container finished" podID="26928cf5-7a29-4fab-a501-5746726fc42a" containerID="cf5d6f96b976fd01d4f59841045416396d0e05c1aeb5c738f3b2003a516bd24d" exitCode=0
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.484641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h" event={"ID":"26928cf5-7a29-4fab-a501-5746726fc42a","Type":"ContainerDied","Data":"cf5d6f96b976fd01d4f59841045416396d0e05c1aeb5c738f3b2003a516bd24d"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.484684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h" event={"ID":"26928cf5-7a29-4fab-a501-5746726fc42a","Type":"ContainerStarted","Data":"403c4aa5de9aa5e208f8db7257c1e2c3b3e5e95a2ad0d66c24e99f8a971612f5"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.486188 4775 generic.go:334] "Generic (PLEG): container finished" podID="9ed8da8c-1d52-44a3-b1c8-b68000003d91" containerID="ad4721fdee0a09d6f1ae7bbee38e4c36536b30b8fa6aaeaab9d4a101c5700669" exitCode=0
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.486210 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42" event={"ID":"9ed8da8c-1d52-44a3-b1c8-b68000003d91","Type":"ContainerDied","Data":"ad4721fdee0a09d6f1ae7bbee38e4c36536b30b8fa6aaeaab9d4a101c5700669"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.486235 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42" event={"ID":"9ed8da8c-1d52-44a3-b1c8-b68000003d91","Type":"ContainerStarted","Data":"ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.487658 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw" event={"ID":"5980f4a0-814a-4f66-b637-80071a62061b","Type":"ContainerStarted","Data":"d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.489979 4775 generic.go:334] "Generic (PLEG): container finished" podID="cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" containerID="50f2c96b0b5892a7771fccd5951249dad10d9735e71ae46903621151778752dd" exitCode=0
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.490061 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-4dbx9" event={"ID":"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff","Type":"ContainerDied","Data":"50f2c96b0b5892a7771fccd5951249dad10d9735e71ae46903621151778752dd"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.490099 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-4dbx9" event={"ID":"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff","Type":"ContainerStarted","Data":"6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.491765 4775 generic.go:334] "Generic (PLEG): container finished" podID="cce1ea66-c6e5-41e7-b0fc-f915fab736f9" containerID="8fbaa9880c81768fdeafd7a8d660d5afda75513a9354f9b29aea974cf6c99474" exitCode=0
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.491842 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn" event={"ID":"cce1ea66-c6e5-41e7-b0fc-f915fab736f9","Type":"ContainerDied","Data":"8fbaa9880c81768fdeafd7a8d660d5afda75513a9354f9b29aea974cf6c99474"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.491864 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn" event={"ID":"cce1ea66-c6e5-41e7-b0fc-f915fab736f9","Type":"ContainerStarted","Data":"b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.493123 4775 generic.go:334] "Generic (PLEG): container finished" podID="f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" containerID="3089717e59d9d63482e14d904b82257965098590f1b4c79bdacedb05c6060f6e" exitCode=0
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.493153 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc" event={"ID":"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4","Type":"ContainerDied","Data":"3089717e59d9d63482e14d904b82257965098590f1b4c79bdacedb05c6060f6e"}
Jan 23 14:26:26 crc kubenswrapper[4775]: I0123 14:26:26.493169 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc" event={"ID":"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4","Type":"ContainerStarted","Data":"1a3c20c4f02081b2346ab2b36de428d4d6c673b3c82e798cfa24a22c591506b6"}
Jan 23 14:26:27 crc kubenswrapper[4775]: I0123 14:26:27.509031 4775 generic.go:334] "Generic (PLEG): container finished" podID="5980f4a0-814a-4f66-b637-80071a62061b" containerID="dfd2790cbd2b3023e0c67bf180e375a19d1caefe130ba7bcb469b97ad55122e0" exitCode=0
Jan 23 14:26:27 crc kubenswrapper[4775]: I0123 14:26:27.509910 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw" event={"ID":"5980f4a0-814a-4f66-b637-80071a62061b","Type":"ContainerDied","Data":"dfd2790cbd2b3023e0c67bf180e375a19d1caefe130ba7bcb469b97ad55122e0"}
Jan 23 14:26:27 crc kubenswrapper[4775]: I0123 14:26:27.965701 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.086506 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts\") pod \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.086850 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmgkv\" (UniqueName: \"kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv\") pod \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\" (UID: \"9ed8da8c-1d52-44a3-b1c8-b68000003d91\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.087009 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9ed8da8c-1d52-44a3-b1c8-b68000003d91" (UID: "9ed8da8c-1d52-44a3-b1c8-b68000003d91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.088015 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9ed8da8c-1d52-44a3-b1c8-b68000003d91-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.092004 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv" (OuterVolumeSpecName: "kube-api-access-jmgkv") pod "9ed8da8c-1d52-44a3-b1c8-b68000003d91" (UID: "9ed8da8c-1d52-44a3-b1c8-b68000003d91"). InnerVolumeSpecName "kube-api-access-jmgkv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.189307 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmgkv\" (UniqueName: \"kubernetes.io/projected/9ed8da8c-1d52-44a3-b1c8-b68000003d91-kube-api-access-jmgkv\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.230578 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h"
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.240723 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc"
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.256529 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.271873 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4dbx9"
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.391877 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts\") pod \"26928cf5-7a29-4fab-a501-5746726fc42a\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.391935 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts\") pod \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.391971 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zczcx\" (UniqueName: \"kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx\") pod \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\" (UID: \"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392032 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6k7b\" (UniqueName: \"kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b\") pod \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392071 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc8t4\" (UniqueName: \"kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4\") pod \"26928cf5-7a29-4fab-a501-5746726fc42a\" (UID: \"26928cf5-7a29-4fab-a501-5746726fc42a\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392095 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts\") pod \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392126 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjrhr\" (UniqueName: \"kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr\") pod \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\" (UID: \"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392148 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts\") pod \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\" (UID: \"cce1ea66-c6e5-41e7-b0fc-f915fab736f9\") "
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392412 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26928cf5-7a29-4fab-a501-5746726fc42a" (UID: "26928cf5-7a29-4fab-a501-5746726fc42a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392616 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26928cf5-7a29-4fab-a501-5746726fc42a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.392794 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cce1ea66-c6e5-41e7-b0fc-f915fab736f9" (UID: "cce1ea66-c6e5-41e7-b0fc-f915fab736f9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.393515 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" (UID: "cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.393754 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" (UID: "f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.396029 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b" (OuterVolumeSpecName: "kube-api-access-m6k7b") pod "cce1ea66-c6e5-41e7-b0fc-f915fab736f9" (UID: "cce1ea66-c6e5-41e7-b0fc-f915fab736f9"). InnerVolumeSpecName "kube-api-access-m6k7b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.396101 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx" (OuterVolumeSpecName: "kube-api-access-zczcx") pod "cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" (UID: "cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff"). InnerVolumeSpecName "kube-api-access-zczcx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.396288 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4" (OuterVolumeSpecName: "kube-api-access-cc8t4") pod "26928cf5-7a29-4fab-a501-5746726fc42a" (UID: "26928cf5-7a29-4fab-a501-5746726fc42a"). InnerVolumeSpecName "kube-api-access-cc8t4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.396388 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr" (OuterVolumeSpecName: "kube-api-access-bjrhr") pod "f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" (UID: "f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4"). InnerVolumeSpecName "kube-api-access-bjrhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.494659 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495124 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zczcx\" (UniqueName: \"kubernetes.io/projected/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff-kube-api-access-zczcx\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495150 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6k7b\" (UniqueName: \"kubernetes.io/projected/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-kube-api-access-m6k7b\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495170 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc8t4\" (UniqueName: \"kubernetes.io/projected/26928cf5-7a29-4fab-a501-5746726fc42a-kube-api-access-cc8t4\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495190 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495209 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjrhr\" (UniqueName: \"kubernetes.io/projected/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4-kube-api-access-bjrhr\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.495230 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cce1ea66-c6e5-41e7-b0fc-f915fab736f9-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.519538 4775 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.519531 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-q4r8h" event={"ID":"26928cf5-7a29-4fab-a501-5746726fc42a","Type":"ContainerDied","Data":"403c4aa5de9aa5e208f8db7257c1e2c3b3e5e95a2ad0d66c24e99f8a971612f5"} Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.519687 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="403c4aa5de9aa5e208f8db7257c1e2c3b3e5e95a2ad0d66c24e99f8a971612f5" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.523414 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42" event={"ID":"9ed8da8c-1d52-44a3-b1c8-b68000003d91","Type":"ContainerDied","Data":"ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58"} Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.523445 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff19bfe95f04bdee78af2f96579c263431588006753808b8d98807db6f53fb58" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.523508 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-74fa-account-create-update-r8n42" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.527305 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-4dbx9" event={"ID":"cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff","Type":"ContainerDied","Data":"6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01"} Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.527344 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6675dac067de55aea976635a6c1ad021ebe6fd0ca80bad34be3a83c0643d3e01" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.527344 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-4dbx9" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.529175 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn" event={"ID":"cce1ea66-c6e5-41e7-b0fc-f915fab736f9","Type":"ContainerDied","Data":"b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9"} Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.529208 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b446ca4b2234a49868d0255ccdad7ec3f62cb91c3250e5ec9aa847157d81e7f9" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.529248 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.530916 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.531044 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nvvdc" event={"ID":"f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4","Type":"ContainerDied","Data":"1a3c20c4f02081b2346ab2b36de428d4d6c673b3c82e798cfa24a22c591506b6"} Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.531377 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a3c20c4f02081b2346ab2b36de428d4d6c673b3c82e798cfa24a22c591506b6" Jan 23 14:26:28 crc kubenswrapper[4775]: I0123 14:26:28.838696 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.002128 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld67n\" (UniqueName: \"kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n\") pod \"5980f4a0-814a-4f66-b637-80071a62061b\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.002348 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts\") pod \"5980f4a0-814a-4f66-b637-80071a62061b\" (UID: \"5980f4a0-814a-4f66-b637-80071a62061b\") " Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.003560 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5980f4a0-814a-4f66-b637-80071a62061b" (UID: "5980f4a0-814a-4f66-b637-80071a62061b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.008670 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n" (OuterVolumeSpecName: "kube-api-access-ld67n") pod "5980f4a0-814a-4f66-b637-80071a62061b" (UID: "5980f4a0-814a-4f66-b637-80071a62061b"). InnerVolumeSpecName "kube-api-access-ld67n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.104187 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5980f4a0-814a-4f66-b637-80071a62061b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.104241 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld67n\" (UniqueName: \"kubernetes.io/projected/5980f4a0-814a-4f66-b637-80071a62061b-kube-api-access-ld67n\") on node \"crc\" DevicePath \"\"" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.544523 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw" event={"ID":"5980f4a0-814a-4f66-b637-80071a62061b","Type":"ContainerDied","Data":"d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303"} Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.544581 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9a0a938b65febf8e5aacc28e9a6500a18e4d70b0ab6abd58105f05b68b87303" Jan 23 14:26:29 crc kubenswrapper[4775]: I0123 14:26:29.544590 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.609993 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76"] Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610588 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5980f4a0-814a-4f66-b637-80071a62061b" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610604 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5980f4a0-814a-4f66-b637-80071a62061b" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610623 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce1ea66-c6e5-41e7-b0fc-f915fab736f9" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610633 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce1ea66-c6e5-41e7-b0fc-f915fab736f9" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610648 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26928cf5-7a29-4fab-a501-5746726fc42a" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610656 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="26928cf5-7a29-4fab-a501-5746726fc42a" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610703 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610711 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610749 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ed8da8c-1d52-44a3-b1c8-b68000003d91" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610757 4775 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="9ed8da8c-1d52-44a3-b1c8-b68000003d91" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: E0123 14:26:30.610775 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.610783 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611280 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="26928cf5-7a29-4fab-a501-5746726fc42a" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611308 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce1ea66-c6e5-41e7-b0fc-f915fab736f9" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611321 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ed8da8c-1d52-44a3-b1c8-b68000003d91" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611332 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611343 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" containerName="mariadb-database-create" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611354 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5980f4a0-814a-4f66-b637-80071a62061b" containerName="mariadb-account-create-update" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.611997 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.613970 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.613970 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.614870 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-42x4x" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.644496 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76"] Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.729778 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccfqr\" (UniqueName: \"kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.729876 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.729959 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.831063 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.831375 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.831503 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccfqr\" (UniqueName: \"kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.838871 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.839006 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.868294 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccfqr\" (UniqueName: \"kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr\") pod \"nova-kuttl-cell0-conductor-db-sync-jhf76\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:30 crc kubenswrapper[4775]: I0123 14:26:30.931971 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:31 crc kubenswrapper[4775]: W0123 14:26:31.386183 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c069034_d3fc_478b_a45d_2d6c64baf640.slice/crio-da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6 WatchSource:0}: Error finding container da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6: Status 404 returned error can't find the container with id da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6 Jan 23 14:26:31 crc kubenswrapper[4775]: I0123 14:26:31.387083 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76"] Jan 23 14:26:31 crc kubenswrapper[4775]: I0123 14:26:31.565132 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" event={"ID":"5c069034-d3fc-478b-a45d-2d6c64baf640","Type":"ContainerStarted","Data":"da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6"} Jan 23 14:26:40 crc kubenswrapper[4775]: I0123 14:26:40.655301 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" event={"ID":"5c069034-d3fc-478b-a45d-2d6c64baf640","Type":"ContainerStarted","Data":"16a5d90dc00db76cb146a3ab929aa58cbca67687a4216b85575b35f06530fd3a"} Jan 23 14:26:40 crc kubenswrapper[4775]: I0123 14:26:40.681253 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" podStartSLOduration=2.379133384 podStartE2EDuration="10.681236869s" podCreationTimestamp="2026-01-23 14:26:30 +0000 UTC" firstStartedPulling="2026-01-23 14:26:31.388592866 +0000 UTC m=+1338.383421616" lastFinishedPulling="2026-01-23 14:26:39.690696361 +0000 UTC m=+1346.685525101" observedRunningTime="2026-01-23 14:26:40.677744128 +0000 UTC m=+1347.672572868" watchObservedRunningTime="2026-01-23 14:26:40.681236869 +0000 UTC m=+1347.676065609" Jan 23 14:26:51 crc kubenswrapper[4775]: I0123 14:26:51.773180 4775 generic.go:334] "Generic (PLEG): container finished" podID="5c069034-d3fc-478b-a45d-2d6c64baf640" 
containerID="16a5d90dc00db76cb146a3ab929aa58cbca67687a4216b85575b35f06530fd3a" exitCode=0 Jan 23 14:26:51 crc kubenswrapper[4775]: I0123 14:26:51.773264 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" event={"ID":"5c069034-d3fc-478b-a45d-2d6c64baf640","Type":"ContainerDied","Data":"16a5d90dc00db76cb146a3ab929aa58cbca67687a4216b85575b35f06530fd3a"} Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.101409 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.145935 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts\") pod \"5c069034-d3fc-478b-a45d-2d6c64baf640\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.146371 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data\") pod \"5c069034-d3fc-478b-a45d-2d6c64baf640\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.146547 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccfqr\" (UniqueName: \"kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr\") pod \"5c069034-d3fc-478b-a45d-2d6c64baf640\" (UID: \"5c069034-d3fc-478b-a45d-2d6c64baf640\") " Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.153909 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr" (OuterVolumeSpecName: "kube-api-access-ccfqr") pod "5c069034-d3fc-478b-a45d-2d6c64baf640" (UID: "5c069034-d3fc-478b-a45d-2d6c64baf640"). InnerVolumeSpecName "kube-api-access-ccfqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.161026 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts" (OuterVolumeSpecName: "scripts") pod "5c069034-d3fc-478b-a45d-2d6c64baf640" (UID: "5c069034-d3fc-478b-a45d-2d6c64baf640"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.186608 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data" (OuterVolumeSpecName: "config-data") pod "5c069034-d3fc-478b-a45d-2d6c64baf640" (UID: "5c069034-d3fc-478b-a45d-2d6c64baf640"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.248454 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccfqr\" (UniqueName: \"kubernetes.io/projected/5c069034-d3fc-478b-a45d-2d6c64baf640-kube-api-access-ccfqr\") on node \"crc\" DevicePath \"\"" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.248493 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.248510 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c069034-d3fc-478b-a45d-2d6c64baf640-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.794543 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" event={"ID":"5c069034-d3fc-478b-a45d-2d6c64baf640","Type":"ContainerDied","Data":"da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6"} Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.794599 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da12d81f8906ac962615e076bffe5a6abbc13a285f304b96c6b6c46896b583b6" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.794678 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.932647 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:26:53 crc kubenswrapper[4775]: E0123 14:26:53.933148 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c069034-d3fc-478b-a45d-2d6c64baf640" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.933180 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c069034-d3fc-478b-a45d-2d6c64baf640" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.933430 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c069034-d3fc-478b-a45d-2d6c64baf640" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.934094 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.936304 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-42x4x" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.937421 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:26:53 crc kubenswrapper[4775]: I0123 14:26:53.954266 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.059900 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.060228 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk566\" (UniqueName: \"kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.161706 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk566\" (UniqueName: \"kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.161859 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.168919 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.180913 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk566\" (UniqueName: \"kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:54 crc kubenswrapper[4775]: I0123 14:26:54.251291 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:55 crc kubenswrapper[4775]: I0123 14:26:54.553079 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:26:55 crc kubenswrapper[4775]: W0123 14:26:54.560851 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c5ea649_3ec6_4684_a543_92cbb2561c2c.slice/crio-aae6c41a06b90b700f10ac781242a8cc1f26c49368ae3d0b71804b4f7c54253a WatchSource:0}: Error finding container aae6c41a06b90b700f10ac781242a8cc1f26c49368ae3d0b71804b4f7c54253a: Status 404 returned error can't find the container with id aae6c41a06b90b700f10ac781242a8cc1f26c49368ae3d0b71804b4f7c54253a Jan 23 14:26:55 crc kubenswrapper[4775]: I0123 14:26:54.806174 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"5c5ea649-3ec6-4684-a543-92cbb2561c2c","Type":"ContainerStarted","Data":"0fc3116ad5e11a579023342a2bde7e94e9992b7817bc89662a590eddceef91c7"} Jan 23 14:26:55 crc kubenswrapper[4775]: I0123 14:26:54.807223 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:55 crc kubenswrapper[4775]: I0123 14:26:54.807239 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"5c5ea649-3ec6-4684-a543-92cbb2561c2c","Type":"ContainerStarted","Data":"aae6c41a06b90b700f10ac781242a8cc1f26c49368ae3d0b71804b4f7c54253a"} Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.296233 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.325586 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=6.325555491 podStartE2EDuration="6.325555491s" podCreationTimestamp="2026-01-23 14:26:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:26:54.822643489 +0000 UTC m=+1361.817472239" watchObservedRunningTime="2026-01-23 14:26:59.325555491 +0000 UTC m=+1366.320384271" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.767853 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"] Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.769124 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.771731 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.777357 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.785710 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"] Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.971968 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.972040 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:26:59 crc kubenswrapper[4775]: I0123 14:26:59.972115 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgft6\" (UniqueName: \"kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.040872 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.042067 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.043651 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.050254 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.071543 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.072554 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.072779 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.072871 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.072920 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgft6\" (UniqueName: \"kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.079244 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.082294 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.083782 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.087302 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.117464 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgft6\" (UniqueName: \"kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6\") pod \"nova-kuttl-cell0-cell-mapping-bgpzf\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.131079 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.139964 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.141390 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.144192 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.155646 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176172 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176451 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176475 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176515 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176546 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqdv\" (UniqueName: \"kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176574 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx22d\" (UniqueName: \"kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176667 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.176707 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzp69\" (UniqueName: \"kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.221986 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.223199 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.228224 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.234192 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.277669 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqdv\" (UniqueName: \"kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.277715 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx22d\" (UniqueName: \"kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278183 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278343 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzp69\" (UniqueName: \"kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278379 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278413 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278960 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.278967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.279019 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.279457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.296510 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.297007 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.297968 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx22d\" (UniqueName: \"kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d\") pod \"nova-kuttl-metadata-0\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.298303 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqdv\" (UniqueName: \"kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv\") pod \"nova-kuttl-scheduler-0\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.300038 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.300068 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzp69\" (UniqueName: \"kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69\") pod \"nova-kuttl-api-0\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.372964 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.383314 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msdnd\" (UniqueName: \"kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.383401 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.485577 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msdnd\" (UniqueName: \"kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.485654 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.490795 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.503562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msdnd\" (UniqueName: \"kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.520369 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.529879 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.542504 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.644131 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.682865 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.683800 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.687833 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.687962 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.691605 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.691648 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnsz8\" (UniqueName: \"kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.691765 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.721993 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.797722 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.797773 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnsz8\" (UniqueName: \"kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.797851 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.803363 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.812736 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.815028 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.818943 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnsz8\" (UniqueName: \"kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8\") pod \"nova-kuttl-cell1-conductor-db-sync-jnchl\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.869474 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerStarted","Data":"68f512301f6d964a7e5e33ce512013bee3b54f46f7a054e898c3f9210e426230"} Jan 23 14:27:00 crc kubenswrapper[4775]: I0123 14:27:00.870851 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" event={"ID":"e4b500f0-4005-40b9-a54d-0769cc8717f0","Type":"ContainerStarted","Data":"dfed5acd49d6415be2734162e5acd7ffb8af9234ab858619c7b284e2c7ee456d"} Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.006577 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.052373 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.079986 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:27:01 crc kubenswrapper[4775]: W0123 14:27:01.102796 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd12f6cf_eef0_4d55_8500_2d64ed9e7648.slice/crio-c74b107de095453d19a75391e5aae3a435d1e6489cec783e11c3fb51cedba1a5 WatchSource:0}: Error finding container c74b107de095453d19a75391e5aae3a435d1e6489cec783e11c3fb51cedba1a5: Status 404 returned error can't find the container with id c74b107de095453d19a75391e5aae3a435d1e6489cec783e11c3fb51cedba1a5 Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.107876 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.501398 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"] Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.886538 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" event={"ID":"470fdecf-a054-4735-90e9-82e8f2df7393","Type":"ContainerStarted","Data":"4416e85269b1c4f191cdc1bfa52a3e5ae7f058b4bf7a7282d8bc2d3b5f93f115"} Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.886965 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" event={"ID":"470fdecf-a054-4735-90e9-82e8f2df7393","Type":"ContainerStarted","Data":"188e84ad5e9b447be9a639852503c5b0f8e66bee963af4f23bdc811b6b604dc2"} Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.888346 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"d6487ecc-f390-4837-8097-15e1b0bc28ac","Type":"ContainerStarted","Data":"f3a42cea8fd58140cfe12473c775a1de35761c7ed3cab47b52b03cbea0efb84b"} Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.889937 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" event={"ID":"e4b500f0-4005-40b9-a54d-0769cc8717f0","Type":"ContainerStarted","Data":"204b70c75b108eb876b17c40860b15870affa382adc84f2a27cb048cf9061fa7"} Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.893121 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerStarted","Data":"ed764791e32d9123ae4beaa7c6d7c2307e2b1a91e61e749a1d2402749b2f21a1"} Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.894688 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cd12f6cf-eef0-4d55-8500-2d64ed9e7648","Type":"ContainerStarted","Data":"c74b107de095453d19a75391e5aae3a435d1e6489cec783e11c3fb51cedba1a5"} Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.905262 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" podStartSLOduration=1.9052463880000001 
podStartE2EDuration="1.905246388s" podCreationTimestamp="2026-01-23 14:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:01.899550593 +0000 UTC m=+1368.894379343" watchObservedRunningTime="2026-01-23 14:27:01.905246388 +0000 UTC m=+1368.900075128" Jan 23 14:27:01 crc kubenswrapper[4775]: I0123 14:27:01.920128 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" podStartSLOduration=2.920075926 podStartE2EDuration="2.920075926s" podCreationTimestamp="2026-01-23 14:26:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:01.911603361 +0000 UTC m=+1368.906432101" watchObservedRunningTime="2026-01-23 14:27:01.920075926 +0000 UTC m=+1368.914904676" Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.922558 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerStarted","Data":"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"} Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.922920 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerStarted","Data":"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"} Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.925151 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cd12f6cf-eef0-4d55-8500-2d64ed9e7648","Type":"ContainerStarted","Data":"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15"} Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.927310 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"d6487ecc-f390-4837-8097-15e1b0bc28ac","Type":"ContainerStarted","Data":"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a"} Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.929150 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerStarted","Data":"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"} Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.929180 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerStarted","Data":"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"} Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.949268 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.009893812 podStartE2EDuration="4.949247014s" podCreationTimestamp="2026-01-23 14:27:00 +0000 UTC" firstStartedPulling="2026-01-23 14:27:01.06594812 +0000 UTC m=+1368.060776860" lastFinishedPulling="2026-01-23 14:27:04.005301322 +0000 UTC m=+1371.000130062" observedRunningTime="2026-01-23 14:27:04.938082901 +0000 UTC m=+1371.932911651" watchObservedRunningTime="2026-01-23 14:27:04.949247014 +0000 UTC m=+1371.944075764" Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.966646 4775 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=1.798131851 podStartE2EDuration="4.966626356s" podCreationTimestamp="2026-01-23 14:27:00 +0000 UTC" firstStartedPulling="2026-01-23 14:27:00.807963084 +0000 UTC m=+1367.802791824" lastFinishedPulling="2026-01-23 14:27:03.976457599 +0000 UTC m=+1370.971286329" observedRunningTime="2026-01-23 14:27:04.964417743 +0000 UTC m=+1371.959246483" watchObservedRunningTime="2026-01-23 14:27:04.966626356 +0000 UTC m=+1371.961455096" Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.982590 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.103174548 podStartE2EDuration="4.982574477s" podCreationTimestamp="2026-01-23 14:27:00 +0000 UTC" firstStartedPulling="2026-01-23 14:27:01.111128806 +0000 UTC m=+1368.105957546" lastFinishedPulling="2026-01-23 14:27:03.990528735 +0000 UTC m=+1370.985357475" observedRunningTime="2026-01-23 14:27:04.975195234 +0000 UTC m=+1371.970023974" watchObservedRunningTime="2026-01-23 14:27:04.982574477 +0000 UTC m=+1371.977403217" Jan 23 14:27:04 crc kubenswrapper[4775]: I0123 14:27:04.992085 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.084008784 podStartE2EDuration="4.992070392s" podCreationTimestamp="2026-01-23 14:27:00 +0000 UTC" firstStartedPulling="2026-01-23 14:27:01.097258565 +0000 UTC m=+1368.092087305" lastFinishedPulling="2026-01-23 14:27:04.005320133 +0000 UTC m=+1371.000148913" observedRunningTime="2026-01-23 14:27:04.990894018 +0000 UTC m=+1371.985722778" watchObservedRunningTime="2026-01-23 14:27:04.992070392 +0000 UTC m=+1371.986899132" Jan 23 14:27:05 crc kubenswrapper[4775]: I0123 14:27:05.521559 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:05 crc kubenswrapper[4775]: I0123 14:27:05.530948 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:05 crc kubenswrapper[4775]: I0123 14:27:05.531057 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:05 crc kubenswrapper[4775]: I0123 14:27:05.543616 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:07 crc kubenswrapper[4775]: I0123 14:27:07.965335 4775 generic.go:334] "Generic (PLEG): container finished" podID="e4b500f0-4005-40b9-a54d-0769cc8717f0" containerID="204b70c75b108eb876b17c40860b15870affa382adc84f2a27cb048cf9061fa7" exitCode=0 Jan 23 14:27:07 crc kubenswrapper[4775]: I0123 14:27:07.965439 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" event={"ID":"e4b500f0-4005-40b9-a54d-0769cc8717f0","Type":"ContainerDied","Data":"204b70c75b108eb876b17c40860b15870affa382adc84f2a27cb048cf9061fa7"} Jan 23 14:27:08 crc kubenswrapper[4775]: I0123 14:27:08.978627 4775 generic.go:334] "Generic (PLEG): container finished" podID="470fdecf-a054-4735-90e9-82e8f2df7393" containerID="4416e85269b1c4f191cdc1bfa52a3e5ae7f058b4bf7a7282d8bc2d3b5f93f115" exitCode=0 Jan 23 14:27:08 crc kubenswrapper[4775]: I0123 14:27:08.978753 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" event={"ID":"470fdecf-a054-4735-90e9-82e8f2df7393","Type":"ContainerDied","Data":"4416e85269b1c4f191cdc1bfa52a3e5ae7f058b4bf7a7282d8bc2d3b5f93f115"} Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.389401 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.458395 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgft6\" (UniqueName: \"kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6\") pod \"e4b500f0-4005-40b9-a54d-0769cc8717f0\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.458504 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts\") pod \"e4b500f0-4005-40b9-a54d-0769cc8717f0\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.458621 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data\") pod \"e4b500f0-4005-40b9-a54d-0769cc8717f0\" (UID: \"e4b500f0-4005-40b9-a54d-0769cc8717f0\") " Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.465871 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts" (OuterVolumeSpecName: "scripts") pod "e4b500f0-4005-40b9-a54d-0769cc8717f0" (UID: "e4b500f0-4005-40b9-a54d-0769cc8717f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.466837 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6" (OuterVolumeSpecName: "kube-api-access-xgft6") pod "e4b500f0-4005-40b9-a54d-0769cc8717f0" (UID: "e4b500f0-4005-40b9-a54d-0769cc8717f0"). InnerVolumeSpecName "kube-api-access-xgft6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.500527 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data" (OuterVolumeSpecName: "config-data") pod "e4b500f0-4005-40b9-a54d-0769cc8717f0" (UID: "e4b500f0-4005-40b9-a54d-0769cc8717f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.560947 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.560981 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgft6\" (UniqueName: \"kubernetes.io/projected/e4b500f0-4005-40b9-a54d-0769cc8717f0-kube-api-access-xgft6\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.560994 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e4b500f0-4005-40b9-a54d-0769cc8717f0-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.990850 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" event={"ID":"e4b500f0-4005-40b9-a54d-0769cc8717f0","Type":"ContainerDied","Data":"dfed5acd49d6415be2734162e5acd7ffb8af9234ab858619c7b284e2c7ee456d"} Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.991274 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfed5acd49d6415be2734162e5acd7ffb8af9234ab858619c7b284e2c7ee456d" Jan 23 14:27:09 crc kubenswrapper[4775]: I0123 14:27:09.990995 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.186339 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.186606 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-log" containerID="cri-o://f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980" gracePeriod=30 Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.186771 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-api" containerID="cri-o://b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216" gracePeriod=30 Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.231339 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.231534 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15" gracePeriod=30 Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.256059 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.256268 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de" gracePeriod=30 Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.256381 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1" gracePeriod=30 Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.292158 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.373437 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts\") pod \"470fdecf-a054-4735-90e9-82e8f2df7393\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.373529 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data\") pod \"470fdecf-a054-4735-90e9-82e8f2df7393\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.373580 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnsz8\" (UniqueName: \"kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8\") pod \"470fdecf-a054-4735-90e9-82e8f2df7393\" (UID: \"470fdecf-a054-4735-90e9-82e8f2df7393\") " Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.376915 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8" (OuterVolumeSpecName: "kube-api-access-xnsz8") pod "470fdecf-a054-4735-90e9-82e8f2df7393" (UID: "470fdecf-a054-4735-90e9-82e8f2df7393"). InnerVolumeSpecName "kube-api-access-xnsz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.377512 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts" (OuterVolumeSpecName: "scripts") pod "470fdecf-a054-4735-90e9-82e8f2df7393" (UID: "470fdecf-a054-4735-90e9-82e8f2df7393"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.408989 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data" (OuterVolumeSpecName: "config-data") pod "470fdecf-a054-4735-90e9-82e8f2df7393" (UID: "470fdecf-a054-4735-90e9-82e8f2df7393"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.475697 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.475739 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/470fdecf-a054-4735-90e9-82e8f2df7393-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.475759 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnsz8\" (UniqueName: \"kubernetes.io/projected/470fdecf-a054-4735-90e9-82e8f2df7393-kube-api-access-xnsz8\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.549510 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.581635 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.812517 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.858791 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880609 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzp69\" (UniqueName: \"kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69\") pod \"d2a774c2-1605-4329-bd98-fba72cd66171\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880670 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs\") pod \"d2a774c2-1605-4329-bd98-fba72cd66171\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880696 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx22d\" (UniqueName: \"kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d\") pod \"ade3732b-4731-4318-a3ef-7c97825a71ed\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880739 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data\") pod \"d2a774c2-1605-4329-bd98-fba72cd66171\" (UID: \"d2a774c2-1605-4329-bd98-fba72cd66171\") " Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880756 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs\") pod \"ade3732b-4731-4318-a3ef-7c97825a71ed\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.880799 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data\") pod \"ade3732b-4731-4318-a3ef-7c97825a71ed\" (UID: \"ade3732b-4731-4318-a3ef-7c97825a71ed\") " Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.881869 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs" (OuterVolumeSpecName: "logs") pod "ade3732b-4731-4318-a3ef-7c97825a71ed" (UID: "ade3732b-4731-4318-a3ef-7c97825a71ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.882015 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs" (OuterVolumeSpecName: "logs") pod "d2a774c2-1605-4329-bd98-fba72cd66171" (UID: "d2a774c2-1605-4329-bd98-fba72cd66171"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.885668 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69" (OuterVolumeSpecName: "kube-api-access-zzp69") pod "d2a774c2-1605-4329-bd98-fba72cd66171" (UID: "d2a774c2-1605-4329-bd98-fba72cd66171"). InnerVolumeSpecName "kube-api-access-zzp69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.886199 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d" (OuterVolumeSpecName: "kube-api-access-bx22d") pod "ade3732b-4731-4318-a3ef-7c97825a71ed" (UID: "ade3732b-4731-4318-a3ef-7c97825a71ed"). InnerVolumeSpecName "kube-api-access-bx22d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.900507 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data" (OuterVolumeSpecName: "config-data") pod "ade3732b-4731-4318-a3ef-7c97825a71ed" (UID: "ade3732b-4731-4318-a3ef-7c97825a71ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.907460 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data" (OuterVolumeSpecName: "config-data") pod "d2a774c2-1605-4329-bd98-fba72cd66171" (UID: "d2a774c2-1605-4329-bd98-fba72cd66171"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982660 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzp69\" (UniqueName: \"kubernetes.io/projected/d2a774c2-1605-4329-bd98-fba72cd66171-kube-api-access-zzp69\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982685 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2a774c2-1605-4329-bd98-fba72cd66171-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982695 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx22d\" (UniqueName: \"kubernetes.io/projected/ade3732b-4731-4318-a3ef-7c97825a71ed-kube-api-access-bx22d\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982704 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ade3732b-4731-4318-a3ef-7c97825a71ed-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982713 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2a774c2-1605-4329-bd98-fba72cd66171-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:10 crc kubenswrapper[4775]: I0123 14:27:10.982721 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ade3732b-4731-4318-a3ef-7c97825a71ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.001070 4775 generic.go:334] "Generic (PLEG): container finished" podID="d2a774c2-1605-4329-bd98-fba72cd66171" containerID="b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216" exitCode=0 Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.002505 4775 generic.go:334] "Generic (PLEG): container finished" podID="d2a774c2-1605-4329-bd98-fba72cd66171" containerID="f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980" exitCode=143 Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.001165 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.001138 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerDied","Data":"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"} Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.002818 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerDied","Data":"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"} Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.002844 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"d2a774c2-1605-4329-bd98-fba72cd66171","Type":"ContainerDied","Data":"68f512301f6d964a7e5e33ce512013bee3b54f46f7a054e898c3f9210e426230"} Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.002863 4775 scope.go:117] "RemoveContainer" containerID="b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007374 4775 generic.go:334] "Generic (PLEG): container finished" podID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerID="e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1" exitCode=0 Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007397 4775 generic.go:334] "Generic (PLEG): container finished" podID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerID="f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de" exitCode=143 Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007440 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerDied","Data":"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"} Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007463 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerDied","Data":"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"} Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007473 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"ade3732b-4731-4318-a3ef-7c97825a71ed","Type":"ContainerDied","Data":"ed764791e32d9123ae4beaa7c6d7c2307e2b1a91e61e749a1d2402749b2f21a1"} Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.007672 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.009065 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" event={"ID":"470fdecf-a054-4735-90e9-82e8f2df7393","Type":"ContainerDied","Data":"188e84ad5e9b447be9a639852503c5b0f8e66bee963af4f23bdc811b6b604dc2"} Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.009090 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="188e84ad5e9b447be9a639852503c5b0f8e66bee963af4f23bdc811b6b604dc2" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.009112 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.020173 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.032580 4775 scope.go:117] "RemoveContainer" containerID="f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.073713 4775 scope.go:117] "RemoveContainer" containerID="b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216" Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.082879 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216\": container with ID starting with b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216 not found: ID does not exist" containerID="b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.082922 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"} err="failed to get container status \"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216\": rpc error: code = NotFound desc = could not find container \"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216\": container with ID starting with b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216 not found: ID does not exist" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.082949 4775 scope.go:117] "RemoveContainer" containerID="f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980" Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.085384 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980\": container with ID starting with f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980 not found: ID does not exist" containerID="f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.085541 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"} err="failed to get container status \"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980\": rpc error: code = NotFound desc = could not find container \"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980\": container with ID starting with f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980 not found: ID does not exist" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.085562 4775 scope.go:117] "RemoveContainer" containerID="b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.086747 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216"} err="failed to get container status \"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216\": rpc error: code = NotFound desc = could not find container 
\"b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216\": container with ID starting with b1323eb8233cdc66240b6926d63d1feb92fb82144a35db2eb3de8b31d2ed9216 not found: ID does not exist" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.086766 4775 scope.go:117] "RemoveContainer" containerID="f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.087586 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980"} err="failed to get container status \"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980\": rpc error: code = NotFound desc = could not find container \"f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980\": container with ID starting with f164453e6525dbf91c410ed65de38718006a315ec35c8899d2915cfcd1ef2980 not found: ID does not exist" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.087635 4775 scope.go:117] "RemoveContainer" containerID="e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.106905 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.123773 4775 scope.go:117] "RemoveContainer" containerID="f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.127546 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.132786 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.140849 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141239 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-api" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141257 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-api" Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141273 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4b500f0-4005-40b9-a54d-0769cc8717f0" containerName="nova-manage" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141280 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4b500f0-4005-40b9-a54d-0769cc8717f0" containerName="nova-manage" Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141296 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-log" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141303 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-log" Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141314 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-log" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141320 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" 
containerName="nova-kuttl-api-log" Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141334 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-metadata" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141340 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-metadata" Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.141351 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470fdecf-a054-4735-90e9-82e8f2df7393" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141357 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="470fdecf-a054-4735-90e9-82e8f2df7393" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141495 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-metadata" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141507 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-api" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141518 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" containerName="nova-kuttl-metadata-log" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141528 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" containerName="nova-kuttl-api-log" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141537 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="470fdecf-a054-4735-90e9-82e8f2df7393" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.141548 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4b500f0-4005-40b9-a54d-0769cc8717f0" containerName="nova-manage" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.160519 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.160675 4775 scope.go:117] "RemoveContainer" containerID="e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1" Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.161205 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1\": container with ID starting with e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1 not found: ID does not exist" containerID="e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.161235 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"} err="failed to get container status \"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1\": rpc error: code = NotFound desc = could not find container \"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1\": container with ID starting with e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1 not found: ID does not exist" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.161258 4775 scope.go:117] "RemoveContainer" containerID="f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de" Jan 23 14:27:11 crc kubenswrapper[4775]: E0123 14:27:11.161611 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de\": container with ID starting with f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de not found: ID does not exist" containerID="f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.161669 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"} err="failed to get container status \"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de\": rpc error: code = NotFound desc = could not find container \"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de\": container with ID starting with f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de not found: ID does not exist" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.161696 4775 scope.go:117] "RemoveContainer" containerID="e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.163469 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.163468 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1"} err="failed to get container status \"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1\": rpc error: code = NotFound desc = could not find container \"e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1\": container with ID starting with e8c03f2602d77c8ca3745e6c2244bff91717e346a9b95fbcc514b69a6b8800a1 not found: ID does not exist" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 
14:27:11.163529 4775 scope.go:117] "RemoveContainer" containerID="f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.165513 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de"} err="failed to get container status \"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de\": rpc error: code = NotFound desc = could not find container \"f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de\": container with ID starting with f9e6dd6ee748332259544493b056a08476ca7d32e51149aad4b5a5a844d829de not found: ID does not exist" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.178946 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.186378 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.186461 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.186515 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x96t9\" (UniqueName: \"kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.199573 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.203916 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.204082 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.209278 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.217245 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.218777 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.220593 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.226727 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.232605 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.293852 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.293927 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x96t9\" (UniqueName: \"kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.293966 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294029 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294055 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297qf\" (UniqueName: \"kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294085 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294138 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294167 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp587\" (UniqueName: 
\"kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.294540 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.297457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.324890 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x96t9\" (UniqueName: \"kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9\") pod \"nova-kuttl-api-0\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.395521 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.395580 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.395628 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297qf\" (UniqueName: \"kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.395648 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.395688 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp587\" (UniqueName: \"kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.396099 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs\") pod \"nova-kuttl-metadata-0\" (UID: 
\"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.401929 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.402244 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.414690 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297qf\" (UniqueName: \"kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf\") pod \"nova-kuttl-metadata-0\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.419086 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp587\" (UniqueName: \"kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.503818 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.514711 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.533737 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.725344 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade3732b-4731-4318-a3ef-7c97825a71ed" path="/var/lib/kubelet/pods/ade3732b-4731-4318-a3ef-7c97825a71ed/volumes" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.726259 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2a774c2-1605-4329-bd98-fba72cd66171" path="/var/lib/kubelet/pods/d2a774c2-1605-4329-bd98-fba72cd66171/volumes" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.868060 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.903354 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbqdv\" (UniqueName: \"kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv\") pod \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.903556 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data\") pod \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\" (UID: \"cd12f6cf-eef0-4d55-8500-2d64ed9e7648\") " Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.908280 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv" (OuterVolumeSpecName: "kube-api-access-bbqdv") pod "cd12f6cf-eef0-4d55-8500-2d64ed9e7648" (UID: "cd12f6cf-eef0-4d55-8500-2d64ed9e7648"). InnerVolumeSpecName "kube-api-access-bbqdv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:11 crc kubenswrapper[4775]: I0123 14:27:11.937122 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data" (OuterVolumeSpecName: "config-data") pod "cd12f6cf-eef0-4d55-8500-2d64ed9e7648" (UID: "cd12f6cf-eef0-4d55-8500-2d64ed9e7648"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.006254 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.006296 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbqdv\" (UniqueName: \"kubernetes.io/projected/cd12f6cf-eef0-4d55-8500-2d64ed9e7648-kube-api-access-bbqdv\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.018363 4775 generic.go:334] "Generic (PLEG): container finished" podID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" containerID="fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15" exitCode=0 Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.018427 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.018463 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cd12f6cf-eef0-4d55-8500-2d64ed9e7648","Type":"ContainerDied","Data":"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15"} Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.018519 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"cd12f6cf-eef0-4d55-8500-2d64ed9e7648","Type":"ContainerDied","Data":"c74b107de095453d19a75391e5aae3a435d1e6489cec783e11c3fb51cedba1a5"} Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.018640 4775 scope.go:117] "RemoveContainer" containerID="fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.045730 4775 scope.go:117] "RemoveContainer" containerID="fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15" Jan 23 14:27:12 crc kubenswrapper[4775]: E0123 14:27:12.046395 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15\": container with ID starting with fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15 not found: ID does not exist" containerID="fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.046456 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15"} err="failed to get container status \"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15\": rpc error: code = NotFound desc = could not find container \"fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15\": container with ID starting with fbe0abac4e6cee8d6565dd2b6582cfcf62e3451343c85ba596566cd55c678a15 not found: ID does not exist" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.052585 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.065924 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.074173 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: E0123 14:27:12.074567 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.074587 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.074846 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.075466 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.079898 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.090447 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.108116 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.108190 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xkr\" (UniqueName: \"kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.117873 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: W0123 14:27:12.121396 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod354efd80_1bfe_4969_80e5_6ba275d34697.slice/crio-79b671456a4bd17f70f71ef9ffb5fde2cc5e2c39c0ec88732e5d9b28bcd5758e WatchSource:0}: Error finding container 79b671456a4bd17f70f71ef9ffb5fde2cc5e2c39c0ec88732e5d9b28bcd5758e: Status 404 returned error can't find the container with id 79b671456a4bd17f70f71ef9ffb5fde2cc5e2c39c0ec88732e5d9b28bcd5758e Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.132558 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.148015 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.212079 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2xkr\" (UniqueName: \"kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.212294 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.218130 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.228046 4775 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-q2xkr\" (UniqueName: \"kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr\") pod \"nova-kuttl-scheduler-0\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.390270 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:12 crc kubenswrapper[4775]: I0123 14:27:12.817511 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:12 crc kubenswrapper[4775]: W0123 14:27:12.817558 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87285e2b_3522_41c7_800d_1ae2d92cfb18.slice/crio-18cfa5baa8c39bacf83cd78d8fc431fe87c3aa67fd85875c08ab51ca5adc3b38 WatchSource:0}: Error finding container 18cfa5baa8c39bacf83cd78d8fc431fe87c3aa67fd85875c08ab51ca5adc3b38: Status 404 returned error can't find the container with id 18cfa5baa8c39bacf83cd78d8fc431fe87c3aa67fd85875c08ab51ca5adc3b38 Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.034861 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"87285e2b-3522-41c7-800d-1ae2d92cfb18","Type":"ContainerStarted","Data":"67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.034906 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"87285e2b-3522-41c7-800d-1ae2d92cfb18","Type":"ContainerStarted","Data":"18cfa5baa8c39bacf83cd78d8fc431fe87c3aa67fd85875c08ab51ca5adc3b38"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.036591 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerStarted","Data":"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.036627 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerStarted","Data":"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.036644 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerStarted","Data":"56bd4495cf68ab5a756bf3af1e37ce78f529d5988a8c39d8d01e594b1f0ddb64"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.038347 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerStarted","Data":"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.038382 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerStarted","Data":"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.038392 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerStarted","Data":"79b671456a4bd17f70f71ef9ffb5fde2cc5e2c39c0ec88732e5d9b28bcd5758e"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.040529 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"60634ae6-20de-4c41-b4bf-0fceda1df7e5","Type":"ContainerStarted","Data":"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.040692 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"60634ae6-20de-4c41-b4bf-0fceda1df7e5","Type":"ContainerStarted","Data":"d8f1f0f6e7f62499789debda728a77acf84ec6f7e20d7816daa6f9e8b8134f7b"} Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.040857 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.055366 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.055349065 podStartE2EDuration="1.055349065s" podCreationTimestamp="2026-01-23 14:27:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:13.051334349 +0000 UTC m=+1380.046163089" watchObservedRunningTime="2026-01-23 14:27:13.055349065 +0000 UTC m=+1380.050177795" Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.079156 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.079139302 podStartE2EDuration="2.079139302s" podCreationTimestamp="2026-01-23 14:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:13.07214694 +0000 UTC m=+1380.066975690" watchObservedRunningTime="2026-01-23 14:27:13.079139302 +0000 UTC m=+1380.073968042" Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.096589 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.096570786 podStartE2EDuration="2.096570786s" podCreationTimestamp="2026-01-23 14:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:13.09568651 +0000 UTC m=+1380.090515260" watchObservedRunningTime="2026-01-23 14:27:13.096570786 +0000 UTC m=+1380.091399536" Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.112912 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.112889698 podStartE2EDuration="2.112889698s" podCreationTimestamp="2026-01-23 14:27:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:13.110415916 +0000 UTC m=+1380.105244676" watchObservedRunningTime="2026-01-23 14:27:13.112889698 +0000 UTC m=+1380.107718468" Jan 23 14:27:13 crc kubenswrapper[4775]: I0123 14:27:13.727015 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd12f6cf-eef0-4d55-8500-2d64ed9e7648" path="/var/lib/kubelet/pods/cd12f6cf-eef0-4d55-8500-2d64ed9e7648/volumes" Jan 23 14:27:16 crc 
kubenswrapper[4775]: I0123 14:27:16.533931 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:16 crc kubenswrapper[4775]: I0123 14:27:16.534324 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:17 crc kubenswrapper[4775]: I0123 14:27:17.390776 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:21 crc kubenswrapper[4775]: I0123 14:27:21.504621 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:21 crc kubenswrapper[4775]: I0123 14:27:21.505225 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:21 crc kubenswrapper[4775]: I0123 14:27:21.534658 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:21 crc kubenswrapper[4775]: I0123 14:27:21.534753 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:21 crc kubenswrapper[4775]: I0123 14:27:21.567750 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.164768 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl"] Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.165856 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.168798 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.168998 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.204967 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl"] Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.285298 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.285415 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.285449 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psfd8\" (UniqueName: \"kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: 
\"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.387486 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.387592 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psfd8\" (UniqueName: \"kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.387734 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.391082 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.395787 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.401378 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.428338 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psfd8\" (UniqueName: \"kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8\") pod \"nova-kuttl-cell1-cell-mapping-rwhvl\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.441492 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.493214 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.583760 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.675575 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.131:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.676658 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.133:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.676727 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.131:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:27:22 crc kubenswrapper[4775]: I0123 14:27:22.676789 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.133:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:27:23 crc kubenswrapper[4775]: I0123 14:27:23.015724 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl"] Jan 23 14:27:23 crc kubenswrapper[4775]: I0123 14:27:23.558520 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" event={"ID":"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084","Type":"ContainerStarted","Data":"711f68f5e6e9927f1844635ae91ffaae80eaf390a5a10c418f40e975d1662c3b"} Jan 23 14:27:23 crc kubenswrapper[4775]: I0123 14:27:23.558945 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" event={"ID":"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084","Type":"ContainerStarted","Data":"c765600145c9d483f1c3d5fdeaac06e44af30c8da8108e113ebfc8ab5678c66c"} Jan 23 14:27:27 crc kubenswrapper[4775]: I0123 14:27:27.601975 4775 generic.go:334] "Generic (PLEG): container finished" podID="5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" containerID="711f68f5e6e9927f1844635ae91ffaae80eaf390a5a10c418f40e975d1662c3b" exitCode=0 Jan 23 14:27:27 crc kubenswrapper[4775]: I0123 14:27:27.602090 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" event={"ID":"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084","Type":"ContainerDied","Data":"711f68f5e6e9927f1844635ae91ffaae80eaf390a5a10c418f40e975d1662c3b"} Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.057073 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.128711 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psfd8\" (UniqueName: \"kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8\") pod \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.128753 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts\") pod \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.128826 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data\") pod \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\" (UID: \"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084\") " Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.134408 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8" (OuterVolumeSpecName: "kube-api-access-psfd8") pod "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" (UID: "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084"). InnerVolumeSpecName "kube-api-access-psfd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.135908 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts" (OuterVolumeSpecName: "scripts") pod "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" (UID: "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.169463 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data" (OuterVolumeSpecName: "config-data") pod "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" (UID: "5e6ea152-3ef9-4ed3-85c8-b6798fa8d084"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.230452 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psfd8\" (UniqueName: \"kubernetes.io/projected/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-kube-api-access-psfd8\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.230485 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.230494 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.627258 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" event={"ID":"5e6ea152-3ef9-4ed3-85c8-b6798fa8d084","Type":"ContainerDied","Data":"c765600145c9d483f1c3d5fdeaac06e44af30c8da8108e113ebfc8ab5678c66c"} Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.627628 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c765600145c9d483f1c3d5fdeaac06e44af30c8da8108e113ebfc8ab5678c66c" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.627771 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl" Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.864200 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.864544 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-log" containerID="cri-o://2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde" gracePeriod=30 Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.864743 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-api" containerID="cri-o://7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442" gracePeriod=30 Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.935359 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:27:29 crc kubenswrapper[4775]: I0123 14:27:29.935693 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" gracePeriod=30 Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.028451 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.028778 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0" gracePeriod=30 Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.028917 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1" gracePeriod=30 Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.639246 4775 generic.go:334] "Generic (PLEG): container finished" podID="fc956fab-2268-4862-a43b-57501989f228" containerID="2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde" exitCode=143 Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.639344 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerDied","Data":"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde"} Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.641907 4775 generic.go:334] "Generic (PLEG): container finished" podID="354efd80-1bfe-4969-80e5-6ba275d34697" containerID="afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0" exitCode=143 Jan 23 14:27:30 crc kubenswrapper[4775]: I0123 14:27:30.641954 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerDied","Data":"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0"} Jan 23 14:27:32 crc kubenswrapper[4775]: E0123 14:27:32.393459 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:27:32 crc kubenswrapper[4775]: E0123 14:27:32.396473 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:27:32 crc kubenswrapper[4775]: E0123 14:27:32.398780 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:27:32 crc kubenswrapper[4775]: E0123 14:27:32.398887 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.474120 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.592742 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.609771 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs\") pod \"fc956fab-2268-4862-a43b-57501989f228\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.609848 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x96t9\" (UniqueName: \"kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9\") pod \"fc956fab-2268-4862-a43b-57501989f228\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.609999 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data\") pod \"fc956fab-2268-4862-a43b-57501989f228\" (UID: \"fc956fab-2268-4862-a43b-57501989f228\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.610362 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs" (OuterVolumeSpecName: "logs") pod "fc956fab-2268-4862-a43b-57501989f228" (UID: "fc956fab-2268-4862-a43b-57501989f228"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.611844 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc956fab-2268-4862-a43b-57501989f228-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.620006 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9" (OuterVolumeSpecName: "kube-api-access-x96t9") pod "fc956fab-2268-4862-a43b-57501989f228" (UID: "fc956fab-2268-4862-a43b-57501989f228"). InnerVolumeSpecName "kube-api-access-x96t9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.635209 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data" (OuterVolumeSpecName: "config-data") pod "fc956fab-2268-4862-a43b-57501989f228" (UID: "fc956fab-2268-4862-a43b-57501989f228"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.683097 4775 generic.go:334] "Generic (PLEG): container finished" podID="fc956fab-2268-4862-a43b-57501989f228" containerID="7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442" exitCode=0 Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.683195 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerDied","Data":"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442"} Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.683201 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.683226 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"fc956fab-2268-4862-a43b-57501989f228","Type":"ContainerDied","Data":"56bd4495cf68ab5a756bf3af1e37ce78f529d5988a8c39d8d01e594b1f0ddb64"} Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.683243 4775 scope.go:117] "RemoveContainer" containerID="7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.686362 4775 generic.go:334] "Generic (PLEG): container finished" podID="354efd80-1bfe-4969-80e5-6ba275d34697" containerID="e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1" exitCode=0 Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.686396 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.686574 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerDied","Data":"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1"} Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.686615 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"354efd80-1bfe-4969-80e5-6ba275d34697","Type":"ContainerDied","Data":"79b671456a4bd17f70f71ef9ffb5fde2cc5e2c39c0ec88732e5d9b28bcd5758e"} Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.712727 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs\") pod \"354efd80-1bfe-4969-80e5-6ba275d34697\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.713044 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data\") pod \"354efd80-1bfe-4969-80e5-6ba275d34697\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.713209 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-297qf\" (UniqueName: \"kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf\") pod \"354efd80-1bfe-4969-80e5-6ba275d34697\" (UID: \"354efd80-1bfe-4969-80e5-6ba275d34697\") " Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.713566 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x96t9\" (UniqueName: \"kubernetes.io/projected/fc956fab-2268-4862-a43b-57501989f228-kube-api-access-x96t9\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.713667 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc956fab-2268-4862-a43b-57501989f228-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.714537 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs" (OuterVolumeSpecName: "logs") pod "354efd80-1bfe-4969-80e5-6ba275d34697" (UID: "354efd80-1bfe-4969-80e5-6ba275d34697"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.718875 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf" (OuterVolumeSpecName: "kube-api-access-297qf") pod "354efd80-1bfe-4969-80e5-6ba275d34697" (UID: "354efd80-1bfe-4969-80e5-6ba275d34697"). InnerVolumeSpecName "kube-api-access-297qf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.737554 4775 scope.go:117] "RemoveContainer" containerID="2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.739175 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data" (OuterVolumeSpecName: "config-data") pod "354efd80-1bfe-4969-80e5-6ba275d34697" (UID: "354efd80-1bfe-4969-80e5-6ba275d34697"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.741216 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759059 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759119 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.759507 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" containerName="nova-manage" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759524 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" containerName="nova-manage" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.759547 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-log" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759558 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-log" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.759570 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-api" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759578 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-api" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.759588 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-metadata" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759598 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-metadata" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.759615 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-log" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759623 4775 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-log" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759795 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-log" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759838 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-log" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759863 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" containerName="nova-kuttl-metadata-metadata" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759881 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" containerName="nova-manage" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.759904 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc956fab-2268-4862-a43b-57501989f228" containerName="nova-kuttl-api-api" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.761248 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.763051 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.787449 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.810016 4775 scope.go:117] "RemoveContainer" containerID="7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.810459 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442\": container with ID starting with 7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442 not found: ID does not exist" containerID="7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.810496 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442"} err="failed to get container status \"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442\": rpc error: code = NotFound desc = could not find container \"7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442\": container with ID starting with 7ce102529b67e2d758cd642e6da6b4e6c8993a84cc80ca9ef54bbd72a6a57442 not found: ID does not exist" Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.810522 4775 scope.go:117] "RemoveContainer" containerID="2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde" Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.811258 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde\": container with ID starting with 2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde not found: ID does not exist" containerID="2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde" Jan 
23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.811294 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde"} err="failed to get container status \"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde\": rpc error: code = NotFound desc = could not find container \"2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde\": container with ID starting with 2606d56bf65ed7f3f2560a6c79a53c90ea6b5d02cf22d9083935f398801d9cde not found: ID does not exist"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.811322 4775 scope.go:117] "RemoveContainer" containerID="e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.815162 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.815335 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.816166 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgxwb\" (UniqueName: \"kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.816438 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-297qf\" (UniqueName: \"kubernetes.io/projected/354efd80-1bfe-4969-80e5-6ba275d34697-kube-api-access-297qf\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.816504 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354efd80-1bfe-4969-80e5-6ba275d34697-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.816555 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354efd80-1bfe-4969-80e5-6ba275d34697-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.832247 4775 scope.go:117] "RemoveContainer" containerID="afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.854079 4775 scope.go:117] "RemoveContainer" containerID="e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1"
Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.856009 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1\": container with ID starting with e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1 not found: ID does not exist" containerID="e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.856073 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1"} err="failed to get container status \"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1\": rpc error: code = NotFound desc = could not find container \"e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1\": container with ID starting with e3e4205ae38d8b58d207903c4fa7cc9fc52aa806d9ca4a29ad913ecc3f6de1e1 not found: ID does not exist"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.856108 4775 scope.go:117] "RemoveContainer" containerID="afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0"
Jan 23 14:27:33 crc kubenswrapper[4775]: E0123 14:27:33.857060 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0\": container with ID starting with afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0 not found: ID does not exist" containerID="afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.857106 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0"} err="failed to get container status \"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0\": rpc error: code = NotFound desc = could not find container \"afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0\": container with ID starting with afca09300cfdb5918b2f0d30ea54502b9c93f1fc939524b37b0e74c3c92030c0 not found: ID does not exist"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.917979 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.918028 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgxwb\" (UniqueName: \"kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.918089 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.918346 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.923327 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:33 crc kubenswrapper[4775]: I0123 14:27:33.950531 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgxwb\" (UniqueName: \"kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb\") pod \"nova-kuttl-api-0\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.014843 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.024980 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.047558 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.048981 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.053117 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data"
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.090666 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.115515 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.121291 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxwzv\" (UniqueName: \"kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.121361 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.121434 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.223348 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxwzv\" (UniqueName: \"kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.223453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
\"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.223528 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.224171 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.229527 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.281429 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxwzv\" (UniqueName: \"kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv\") pod \"nova-kuttl-metadata-0\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.481412 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:27:34 crc kubenswrapper[4775]: W0123 14:27:34.617739 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40c54c9a_246a_4dab_af73_779d4d8539e4.slice/crio-cc067c426dd03351b5a8a8591d3c2c83477c0b5d51ea784970cfb53f7e6d267e WatchSource:0}: Error finding container cc067c426dd03351b5a8a8591d3c2c83477c0b5d51ea784970cfb53f7e6d267e: Status 404 returned error can't find the container with id cc067c426dd03351b5a8a8591d3c2c83477c0b5d51ea784970cfb53f7e6d267e Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.618708 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.715614 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerStarted","Data":"cc067c426dd03351b5a8a8591d3c2c83477c0b5d51ea784970cfb53f7e6d267e"} Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.719552 4775 generic.go:334] "Generic (PLEG): container finished" podID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerID="67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" exitCode=0 Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.719556 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"87285e2b-3522-41c7-800d-1ae2d92cfb18","Type":"ContainerDied","Data":"67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561"} Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.841734 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.947211 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.947756 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data\") pod \"87285e2b-3522-41c7-800d-1ae2d92cfb18\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.947851 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2xkr\" (UniqueName: \"kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr\") pod \"87285e2b-3522-41c7-800d-1ae2d92cfb18\" (UID: \"87285e2b-3522-41c7-800d-1ae2d92cfb18\") " Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.951057 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr" (OuterVolumeSpecName: "kube-api-access-q2xkr") pod "87285e2b-3522-41c7-800d-1ae2d92cfb18" (UID: "87285e2b-3522-41c7-800d-1ae2d92cfb18"). InnerVolumeSpecName "kube-api-access-q2xkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:27:34 crc kubenswrapper[4775]: I0123 14:27:34.968068 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data" (OuterVolumeSpecName: "config-data") pod "87285e2b-3522-41c7-800d-1ae2d92cfb18" (UID: "87285e2b-3522-41c7-800d-1ae2d92cfb18"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.049086 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87285e2b-3522-41c7-800d-1ae2d92cfb18-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.049122 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2xkr\" (UniqueName: \"kubernetes.io/projected/87285e2b-3522-41c7-800d-1ae2d92cfb18-kube-api-access-q2xkr\") on node \"crc\" DevicePath \"\"" Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.731925 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354efd80-1bfe-4969-80e5-6ba275d34697" path="/var/lib/kubelet/pods/354efd80-1bfe-4969-80e5-6ba275d34697/volumes" Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.733250 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc956fab-2268-4862-a43b-57501989f228" path="/var/lib/kubelet/pods/fc956fab-2268-4862-a43b-57501989f228/volumes" Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.738207 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerStarted","Data":"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6"} Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.738287 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerStarted","Data":"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d"} Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.742210 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerStarted","Data":"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058"} Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.742271 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerStarted","Data":"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e"} Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.742295 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerStarted","Data":"0da722dd90642caf85fa0f11331565aec51183c8f53f1cf43b2602bc06530edf"} Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.744197 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"87285e2b-3522-41c7-800d-1ae2d92cfb18","Type":"ContainerDied","Data":"18cfa5baa8c39bacf83cd78d8fc431fe87c3aa67fd85875c08ab51ca5adc3b38"} Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.744253 4775 scope.go:117] "RemoveContainer" containerID="67681e6112c11f53a2adc89b791004105371ee1f5459b827d6fb6e8173a6d561" Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.744302 4775 util.go:48] "No ready sandbox for pod can be found. 
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.778214 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.778189111 podStartE2EDuration="2.778189111s" podCreationTimestamp="2026-01-23 14:27:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:35.764723941 +0000 UTC m=+1402.759552721" watchObservedRunningTime="2026-01-23 14:27:35.778189111 +0000 UTC m=+1402.773017891"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.798963 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=1.7989346400000001 podStartE2EDuration="1.79893464s" podCreationTimestamp="2026-01-23 14:27:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:35.789211699 +0000 UTC m=+1402.784040489" watchObservedRunningTime="2026-01-23 14:27:35.79893464 +0000 UTC m=+1402.793763420"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.821307 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.831525 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.840026 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:35 crc kubenswrapper[4775]: E0123 14:27:35.840342 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.840359 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.840507 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.841018 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.843548 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.858569 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.865262 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.865358 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj4np\" (UniqueName: \"kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.966743 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj4np\" (UniqueName: \"kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.966886 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.981057 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:35 crc kubenswrapper[4775]: I0123 14:27:35.981073 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj4np\" (UniqueName: \"kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np\") pod \"nova-kuttl-scheduler-0\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:36 crc kubenswrapper[4775]: I0123 14:27:36.158935 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:36 crc kubenswrapper[4775]: I0123 14:27:36.654058 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:27:36 crc kubenswrapper[4775]: W0123 14:27:36.659354 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e96bb87_5923_457f_bf02_51a1182e90bc.slice/crio-5bbc8cbd22e1e763806e59239a30a31f8865fb7589db1e6ad2f16cc53daa3460 WatchSource:0}: Error finding container 5bbc8cbd22e1e763806e59239a30a31f8865fb7589db1e6ad2f16cc53daa3460: Status 404 returned error can't find the container with id 5bbc8cbd22e1e763806e59239a30a31f8865fb7589db1e6ad2f16cc53daa3460
Jan 23 14:27:36 crc kubenswrapper[4775]: I0123 14:27:36.754684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e96bb87-5923-457f-bf02-51a1182e90bc","Type":"ContainerStarted","Data":"5bbc8cbd22e1e763806e59239a30a31f8865fb7589db1e6ad2f16cc53daa3460"}
Jan 23 14:27:37 crc kubenswrapper[4775]: I0123 14:27:37.725104 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87285e2b-3522-41c7-800d-1ae2d92cfb18" path="/var/lib/kubelet/pods/87285e2b-3522-41c7-800d-1ae2d92cfb18/volumes"
Jan 23 14:27:37 crc kubenswrapper[4775]: I0123 14:27:37.768282 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e96bb87-5923-457f-bf02-51a1182e90bc","Type":"ContainerStarted","Data":"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3"}
Jan 23 14:27:37 crc kubenswrapper[4775]: I0123 14:27:37.801332 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.801310353 podStartE2EDuration="2.801310353s" podCreationTimestamp="2026-01-23 14:27:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:27:37.792265111 +0000 UTC m=+1404.787093851" watchObservedRunningTime="2026-01-23 14:27:37.801310353 +0000 UTC m=+1404.796139113"
Jan 23 14:27:39 crc kubenswrapper[4775]: I0123 14:27:39.482177 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:39 crc kubenswrapper[4775]: I0123 14:27:39.483692 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:41 crc kubenswrapper[4775]: I0123 14:27:41.160113 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.330416 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.332596 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.353130 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.391173 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjkhv\" (UniqueName: \"kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.391235 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.391295 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.492498 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.492577 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.492648 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjkhv\" (UniqueName: \"kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.493389 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.493487 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.522209 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjkhv\" (UniqueName: \"kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv\") pod \"redhat-operators-gpjzl\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") " pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:42 crc kubenswrapper[4775]: I0123 14:27:42.707383 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:43 crc kubenswrapper[4775]: I0123 14:27:43.172576 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:27:43 crc kubenswrapper[4775]: I0123 14:27:43.818450 4775 generic.go:334] "Generic (PLEG): container finished" podID="7fecf032-f999-4138-a4e5-e2673da92749" containerID="50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd" exitCode=0
Jan 23 14:27:43 crc kubenswrapper[4775]: I0123 14:27:43.818556 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerDied","Data":"50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd"}
Jan 23 14:27:43 crc kubenswrapper[4775]: I0123 14:27:43.818765 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerStarted","Data":"f19856368022300caf4e899bc52e3098520248d22b5c1d6097fb57b313c3d83f"}
Jan 23 14:27:43 crc kubenswrapper[4775]: I0123 14:27:43.820170 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 23 14:27:44 crc kubenswrapper[4775]: I0123 14:27:44.116360 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:44 crc kubenswrapper[4775]: I0123 14:27:44.116679 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:44 crc kubenswrapper[4775]: I0123 14:27:44.481869 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:44 crc kubenswrapper[4775]: I0123 14:27:44.481950 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:45 crc kubenswrapper[4775]: I0123 14:27:45.200469 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.136:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:27:45 crc kubenswrapper[4775]: I0123 14:27:45.200430 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.136:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:27:45 crc kubenswrapper[4775]: I0123 14:27:45.564322 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.137:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:27:45 crc kubenswrapper[4775]: I0123 14:27:45.564779 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.137:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:27:45 crc kubenswrapper[4775]: I0123 14:27:45.835727 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerStarted","Data":"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"}
Jan 23 14:27:46 crc kubenswrapper[4775]: I0123 14:27:46.160205 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:46 crc kubenswrapper[4775]: I0123 14:27:46.206411 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:46 crc kubenswrapper[4775]: I0123 14:27:46.861562 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:27:48 crc kubenswrapper[4775]: I0123 14:27:48.857628 4775 generic.go:334] "Generic (PLEG): container finished" podID="7fecf032-f999-4138-a4e5-e2673da92749" containerID="f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e" exitCode=0
Jan 23 14:27:48 crc kubenswrapper[4775]: I0123 14:27:48.857969 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerDied","Data":"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"}
Jan 23 14:27:50 crc kubenswrapper[4775]: I0123 14:27:50.879433 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerStarted","Data":"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"}
Jan 23 14:27:50 crc kubenswrapper[4775]: I0123 14:27:50.901336 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gpjzl" podStartSLOduration=2.826238434 podStartE2EDuration="8.901306744s" podCreationTimestamp="2026-01-23 14:27:42 +0000 UTC" firstStartedPulling="2026-01-23 14:27:43.81988104 +0000 UTC m=+1410.814709780" lastFinishedPulling="2026-01-23 14:27:49.89494936 +0000 UTC m=+1416.889778090" observedRunningTime="2026-01-23 14:27:50.899506092 +0000 UTC m=+1417.894334872" watchObservedRunningTime="2026-01-23 14:27:50.901306744 +0000 UTC m=+1417.896135534"
Jan 23 14:27:52 crc kubenswrapper[4775]: I0123 14:27:52.708505 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:52 crc kubenswrapper[4775]: I0123 14:27:52.708877 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:27:53 crc kubenswrapper[4775]: I0123 14:27:53.772850 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gpjzl" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="registry-server" probeResult="failure" output=<
Jan 23 14:27:53 crc kubenswrapper[4775]: timeout: failed to connect service \":50051\" within 1s
Jan 23 14:27:53 crc kubenswrapper[4775]: >
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.121957 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.122583 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.126571 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.127464 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.485920 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.486493 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.489872 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.489946 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.912976 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:27:54 crc kubenswrapper[4775]: I0123 14:27:54.919326 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:28:02 crc kubenswrapper[4775]: I0123 14:28:02.783118 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:28:02 crc kubenswrapper[4775]: I0123 14:28:02.857560 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:28:03 crc kubenswrapper[4775]: I0123 14:28:03.894112 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.017843 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gpjzl" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="registry-server" containerID="cri-o://9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b" gracePeriod=2
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.551710 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.583589 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities\") pod \"7fecf032-f999-4138-a4e5-e2673da92749\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") "
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.583832 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content\") pod \"7fecf032-f999-4138-a4e5-e2673da92749\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") "
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.583923 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjkhv\" (UniqueName: \"kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv\") pod \"7fecf032-f999-4138-a4e5-e2673da92749\" (UID: \"7fecf032-f999-4138-a4e5-e2673da92749\") "
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.584349 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities" (OuterVolumeSpecName: "utilities") pod "7fecf032-f999-4138-a4e5-e2673da92749" (UID: "7fecf032-f999-4138-a4e5-e2673da92749"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.586512 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-utilities\") on node \"crc\" DevicePath \"\""
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.591345 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv" (OuterVolumeSpecName: "kube-api-access-hjkhv") pod "7fecf032-f999-4138-a4e5-e2673da92749" (UID: "7fecf032-f999-4138-a4e5-e2673da92749"). InnerVolumeSpecName "kube-api-access-hjkhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.688595 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjkhv\" (UniqueName: \"kubernetes.io/projected/7fecf032-f999-4138-a4e5-e2673da92749-kube-api-access-hjkhv\") on node \"crc\" DevicePath \"\""
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.696676 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7fecf032-f999-4138-a4e5-e2673da92749" (UID: "7fecf032-f999-4138-a4e5-e2673da92749"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:28:04 crc kubenswrapper[4775]: I0123 14:28:04.789685 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fecf032-f999-4138-a4e5-e2673da92749-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.028763 4775 generic.go:334] "Generic (PLEG): container finished" podID="7fecf032-f999-4138-a4e5-e2673da92749" containerID="9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b" exitCode=0
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.028876 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gpjzl"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.028906 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerDied","Data":"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"}
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.029408 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gpjzl" event={"ID":"7fecf032-f999-4138-a4e5-e2673da92749","Type":"ContainerDied","Data":"f19856368022300caf4e899bc52e3098520248d22b5c1d6097fb57b313c3d83f"}
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.029458 4775 scope.go:117] "RemoveContainer" containerID="9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.060734 4775 scope.go:117] "RemoveContainer" containerID="f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.080520 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.086102 4775 scope.go:117] "RemoveContainer" containerID="50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.097544 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gpjzl"]
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.123043 4775 scope.go:117] "RemoveContainer" containerID="9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"
Jan 23 14:28:05 crc kubenswrapper[4775]: E0123 14:28:05.129081 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b\": container with ID starting with 9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b not found: ID does not exist" containerID="9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.129129 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b"} err="failed to get container status \"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b\": rpc error: code = NotFound desc = could not find container \"9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b\": container with ID starting with 9e302d0bf0a17106f01745ef27e10d10f2fc8dbbd317df43d99c71400c94bd8b not found: ID does not exist"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.129161 4775 scope.go:117] "RemoveContainer" containerID="f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"
Jan 23 14:28:05 crc kubenswrapper[4775]: E0123 14:28:05.129684 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e\": container with ID starting with f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e not found: ID does not exist" containerID="f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.129740 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e"} err="failed to get container status \"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e\": rpc error: code = NotFound desc = could not find container \"f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e\": container with ID starting with f92d2b382016c85e8331d50289c41b5d13ba2d592fc5335d3ef5d073c2570f1e not found: ID does not exist"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.129771 4775 scope.go:117] "RemoveContainer" containerID="50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd"
Jan 23 14:28:05 crc kubenswrapper[4775]: E0123 14:28:05.130106 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd\": container with ID starting with 50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd not found: ID does not exist" containerID="50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.130143 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd"} err="failed to get container status \"50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd\": rpc error: code = NotFound desc = could not find container \"50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd\": container with ID starting with 50e0bf4586a1ffec8c1f26b17ba6d579e11e79688a064b7a64e866f14bc1d1fd not found: ID does not exist"
Jan 23 14:28:05 crc kubenswrapper[4775]: I0123 14:28:05.734517 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fecf032-f999-4138-a4e5-e2673da92749" path="/var/lib/kubelet/pods/7fecf032-f999-4138-a4e5-e2673da92749/volumes"
Jan 23 14:28:23 crc kubenswrapper[4775]: I0123 14:28:23.219329 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:28:23 crc kubenswrapper[4775]: I0123 14:28:23.219830 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:28:53 crc kubenswrapper[4775]: I0123 14:28:53.219664 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:28:53 crc kubenswrapper[4775]: I0123 14:28:53.220613 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.218714 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.219603 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.219719 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.220862 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.220956 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" gracePeriod=600
Jan 23 14:29:23 crc kubenswrapper[4775]: E0123 14:29:23.353291 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.833741 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" exitCode=0
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.833861 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"}
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.833957 4775 scope.go:117] "RemoveContainer" containerID="a5634c941e351401aed478dd8e700e6d7b7de6241fab2a08ba60719db5eab596"
Jan 23 14:29:23 crc kubenswrapper[4775]: I0123 14:29:23.835289 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:29:23 crc kubenswrapper[4775]: E0123 14:29:23.836135 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:29:38 crc kubenswrapper[4775]: I0123 14:29:38.713987 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:29:38 crc kubenswrapper[4775]: E0123 14:29:38.715065 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:29:44 crc kubenswrapper[4775]: I0123 14:29:44.996109 4775 scope.go:117] "RemoveContainer" containerID="a13f8eef0e3c756f922ffa047c8687839a95c0c6de399f124374a283f7dcaa06"
Jan 23 14:29:45 crc kubenswrapper[4775]: I0123 14:29:45.037404 4775 scope.go:117] "RemoveContainer" containerID="e4d3d7427f456db9c410656944ad8601abb63e17de245cf5ef8fa44d9943c71d"
Jan 23 14:29:51 crc kubenswrapper[4775]: I0123 14:29:51.713792 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:29:51 crc kubenswrapper[4775]: E0123 14:29:51.714532 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.169078 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb"]
Jan 23 14:30:00 crc kubenswrapper[4775]: E0123 14:30:00.170283 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="extract-content"
Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.170306 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="extract-content"
Jan 23 14:30:00 crc kubenswrapper[4775]: E0123 14:30:00.170329 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="registry-server"
Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.170344 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="registry-server"
Jan 23 14:30:00 crc kubenswrapper[4775]: E0123 14:30:00.170366 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="extract-utilities"
14:30:00.170366 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="extract-utilities" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.170379 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="extract-utilities" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.170639 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fecf032-f999-4138-a4e5-e2673da92749" containerName="registry-server" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.171571 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.175254 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.175482 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.187007 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb"] Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.370791 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.371265 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfgvp\" (UniqueName: \"kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.371556 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.473042 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.473205 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc 
kubenswrapper[4775]: I0123 14:30:00.473243 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfgvp\" (UniqueName: \"kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.475043 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.481486 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.502329 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfgvp\" (UniqueName: \"kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp\") pod \"collect-profiles-29486310-grwcb\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:00 crc kubenswrapper[4775]: I0123 14:30:00.796844 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:01 crc kubenswrapper[4775]: I0123 14:30:01.285441 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb"] Jan 23 14:30:02 crc kubenswrapper[4775]: I0123 14:30:02.239162 4775 generic.go:334] "Generic (PLEG): container finished" podID="f319b79a-801c-4377-b8a2-cdc4435feb06" containerID="29d1ea4fd73c7e0cf4e80e994a13c06aab543f04f718e1659d74ebea4f313156" exitCode=0 Jan 23 14:30:02 crc kubenswrapper[4775]: I0123 14:30:02.239219 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" event={"ID":"f319b79a-801c-4377-b8a2-cdc4435feb06","Type":"ContainerDied","Data":"29d1ea4fd73c7e0cf4e80e994a13c06aab543f04f718e1659d74ebea4f313156"} Jan 23 14:30:02 crc kubenswrapper[4775]: I0123 14:30:02.239608 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" event={"ID":"f319b79a-801c-4377-b8a2-cdc4435feb06","Type":"ContainerStarted","Data":"08f3baaf299a671db85f9ae20a89841b65983070de7bf5dd035bb4fe16777b95"} Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.610148 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.729665 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume\") pod \"f319b79a-801c-4377-b8a2-cdc4435feb06\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.729945 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfgvp\" (UniqueName: \"kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp\") pod \"f319b79a-801c-4377-b8a2-cdc4435feb06\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.730011 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume\") pod \"f319b79a-801c-4377-b8a2-cdc4435feb06\" (UID: \"f319b79a-801c-4377-b8a2-cdc4435feb06\") " Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.730567 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume" (OuterVolumeSpecName: "config-volume") pod "f319b79a-801c-4377-b8a2-cdc4435feb06" (UID: "f319b79a-801c-4377-b8a2-cdc4435feb06"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.735846 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f319b79a-801c-4377-b8a2-cdc4435feb06" (UID: "f319b79a-801c-4377-b8a2-cdc4435feb06"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.738139 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp" (OuterVolumeSpecName: "kube-api-access-nfgvp") pod "f319b79a-801c-4377-b8a2-cdc4435feb06" (UID: "f319b79a-801c-4377-b8a2-cdc4435feb06"). InnerVolumeSpecName "kube-api-access-nfgvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.833921 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfgvp\" (UniqueName: \"kubernetes.io/projected/f319b79a-801c-4377-b8a2-cdc4435feb06-kube-api-access-nfgvp\") on node \"crc\" DevicePath \"\"" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.833975 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f319b79a-801c-4377-b8a2-cdc4435feb06-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:30:03 crc kubenswrapper[4775]: I0123 14:30:03.833996 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f319b79a-801c-4377-b8a2-cdc4435feb06-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:30:04 crc kubenswrapper[4775]: I0123 14:30:04.269454 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" event={"ID":"f319b79a-801c-4377-b8a2-cdc4435feb06","Type":"ContainerDied","Data":"08f3baaf299a671db85f9ae20a89841b65983070de7bf5dd035bb4fe16777b95"} Jan 23 14:30:04 crc kubenswrapper[4775]: I0123 14:30:04.269728 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08f3baaf299a671db85f9ae20a89841b65983070de7bf5dd035bb4fe16777b95" Jan 23 14:30:04 crc kubenswrapper[4775]: I0123 14:30:04.269542 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486310-grwcb" Jan 23 14:30:04 crc kubenswrapper[4775]: I0123 14:30:04.713568 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:30:04 crc kubenswrapper[4775]: E0123 14:30:04.714365 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:30:15 crc kubenswrapper[4775]: I0123 14:30:15.714089 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:30:15 crc kubenswrapper[4775]: E0123 14:30:15.715200 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:30:26 crc kubenswrapper[4775]: I0123 14:30:26.714903 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:30:26 crc kubenswrapper[4775]: E0123 14:30:26.715912 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:30:40 crc kubenswrapper[4775]: I0123 14:30:40.714340 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:30:40 crc kubenswrapper[4775]: E0123 14:30:40.715256 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:30:45 crc kubenswrapper[4775]: I0123 14:30:45.115595 4775 scope.go:117] "RemoveContainer" containerID="2ee19493765c2e784fbd1d7e401c527b26da5317dbb06d292407f1d608775812" Jan 23 14:30:52 crc kubenswrapper[4775]: I0123 14:30:52.714584 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:30:52 crc kubenswrapper[4775]: E0123 14:30:52.716007 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.387984 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.395187 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.402264 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-rwhvl"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.409767 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bgpzf"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.588272 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapi74fa-account-delete-hs5ds"] Jan 23 14:31:04 crc kubenswrapper[4775]: E0123 14:31:04.588557 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f319b79a-801c-4377-b8a2-cdc4435feb06" containerName="collect-profiles" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.588572 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f319b79a-801c-4377-b8a2-cdc4435feb06" containerName="collect-profiles" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.588730 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f319b79a-801c-4377-b8a2-cdc4435feb06" containerName="collect-profiles" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.589234 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.608641 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.608941 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-log" containerID="cri-o://f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.609019 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.656435 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell0dec4-account-delete-2b7mr"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.657630 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.667479 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.667859 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.680781 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi74fa-account-delete-hs5ds"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.713880 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.742876 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0dec4-account-delete-2b7mr"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.763079 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.763267 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://0fc3116ad5e11a579023342a2bde7e94e9992b7817bc89662a590eddceef91c7" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.769491 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxlnq\" (UniqueName: \"kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.769568 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f48w\" (UniqueName: \"kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.769651 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.769708 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.786136 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-jhf76"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.808934 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.809438 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-log" containerID="cri-o://92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.809949 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-api" containerID="cri-o://19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.824585 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.825656 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.853351 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.867864 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.868107 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.870939 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxlnq\" (UniqueName: \"kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.870989 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5f48w\" (UniqueName: \"kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.871040 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.871075 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.872558 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.873967 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.877128 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 
14:31:04.881454 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.881617 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e" gracePeriod=30 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.891061 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-jnchl"] Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.922833 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f48w\" (UniqueName: \"kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w\") pod \"novacell0dec4-account-delete-2b7mr\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.928364 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxlnq\" (UniqueName: \"kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq\") pod \"novaapi74fa-account-delete-hs5ds\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.939181 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b50fc49-3582-416c-9b89-0de07e733931" containerID="f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e" exitCode=143 Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.939222 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerDied","Data":"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e"} Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.972287 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2cdn\" (UniqueName: \"kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.972341 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:04 crc kubenswrapper[4775]: I0123 14:31:04.976302 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.073835 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2cdn\" (UniqueName: \"kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.074081 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.074954 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.100344 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2cdn\" (UniqueName: \"kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn\") pod \"novacell1fcdd-account-delete-xg5hq\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.165875 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.218192 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.391302 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell0dec4-account-delete-2b7mr"] Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.543929 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerName="nova-kuttl-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.0.129:6080/vnc_lite.html\": dial tcp 10.217.0.129:6080: connect: connection refused" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.597722 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"] Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.736253 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470fdecf-a054-4735-90e9-82e8f2df7393" path="/var/lib/kubelet/pods/470fdecf-a054-4735-90e9-82e8f2df7393/volumes" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.737022 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c069034-d3fc-478b-a45d-2d6c64baf640" path="/var/lib/kubelet/pods/5c069034-d3fc-478b-a45d-2d6c64baf640/volumes" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.737545 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e6ea152-3ef9-4ed3-85c8-b6798fa8d084" path="/var/lib/kubelet/pods/5e6ea152-3ef9-4ed3-85c8-b6798fa8d084/volumes" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.738087 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4b500f0-4005-40b9-a54d-0769cc8717f0" path="/var/lib/kubelet/pods/e4b500f0-4005-40b9-a54d-0769cc8717f0/volumes" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.743249 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi74fa-account-delete-hs5ds"] Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.893793 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.956307 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" event={"ID":"2868ba1d-ce52-4e16-b1a5-f8a699c07b94","Type":"ContainerStarted","Data":"799ce1823863a3c15c53a4d22727a916392492bc10d370e2462dbc8b6ea31ac8"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.956385 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" event={"ID":"2868ba1d-ce52-4e16-b1a5-f8a699c07b94","Type":"ContainerStarted","Data":"0d3e2cb601d2914db92f9a6a496a379ceafd3bfd20c4312448a83fd697cb56ef"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.961181 4775 generic.go:334] "Generic (PLEG): container finished" podID="74a79494-7611-49ab-9b32-167dbeba6bb6" containerID="4f1cabf38bb4ec4b946564e2b7accc422c82ed3dca66b33da4fca4b19d4c5643" exitCode=0 Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.961286 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" event={"ID":"74a79494-7611-49ab-9b32-167dbeba6bb6","Type":"ContainerDied","Data":"4f1cabf38bb4ec4b946564e2b7accc422c82ed3dca66b33da4fca4b19d4c5643"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.961319 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" event={"ID":"74a79494-7611-49ab-9b32-167dbeba6bb6","Type":"ContainerStarted","Data":"c3f23419eba8102b471ea95d077ddfa50f5c43e670169bf2430a062fd39be852"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.963587 4775 generic.go:334] "Generic (PLEG): container finished" podID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerID="e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a" exitCode=0 Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.963660 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"d6487ecc-f390-4837-8097-15e1b0bc28ac","Type":"ContainerDied","Data":"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.963680 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.963697 4775 scope.go:117] "RemoveContainer" containerID="e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.963686 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"d6487ecc-f390-4837-8097-15e1b0bc28ac","Type":"ContainerDied","Data":"f3a42cea8fd58140cfe12473c775a1de35761c7ed3cab47b52b03cbea0efb84b"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.971370 4775 generic.go:334] "Generic (PLEG): container finished" podID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerID="92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d" exitCode=143 Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.971486 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerDied","Data":"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.972988 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" podStartSLOduration=1.972975326 podStartE2EDuration="1.972975326s" podCreationTimestamp="2026-01-23 14:31:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:05.970746537 +0000 UTC m=+1612.965575357" watchObservedRunningTime="2026-01-23 14:31:05.972975326 +0000 UTC m=+1612.967804066" Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.974074 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" event={"ID":"e62166aa-4f54-4eb0-aae1-69113a424df6","Type":"ContainerStarted","Data":"56c812b1ab00fd7b69cb6786223a7c5ead5a6096821beab6667bb79fc9b54916"} Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.987433 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msdnd\" (UniqueName: \"kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd\") pod \"d6487ecc-f390-4837-8097-15e1b0bc28ac\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.987519 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data\") pod \"d6487ecc-f390-4837-8097-15e1b0bc28ac\" (UID: \"d6487ecc-f390-4837-8097-15e1b0bc28ac\") " Jan 23 14:31:05 crc kubenswrapper[4775]: I0123 14:31:05.993304 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd" (OuterVolumeSpecName: "kube-api-access-msdnd") pod "d6487ecc-f390-4837-8097-15e1b0bc28ac" (UID: "d6487ecc-f390-4837-8097-15e1b0bc28ac"). InnerVolumeSpecName "kube-api-access-msdnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.017023 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data" (OuterVolumeSpecName: "config-data") pod "d6487ecc-f390-4837-8097-15e1b0bc28ac" (UID: "d6487ecc-f390-4837-8097-15e1b0bc28ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.074169 4775 scope.go:117] "RemoveContainer" containerID="e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a" Jan 23 14:31:06 crc kubenswrapper[4775]: E0123 14:31:06.074926 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a\": container with ID starting with e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a not found: ID does not exist" containerID="e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.074992 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a"} err="failed to get container status \"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a\": rpc error: code = NotFound desc = could not find container \"e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a\": container with ID starting with e9cd293241d6fb23305cd22644b9ba266d18f24d704393111b6fac686f6c275a not found: ID does not exist" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.090314 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msdnd\" (UniqueName: \"kubernetes.io/projected/d6487ecc-f390-4837-8097-15e1b0bc28ac-kube-api-access-msdnd\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.090344 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6487ecc-f390-4837-8097-15e1b0bc28ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:06 crc kubenswrapper[4775]: E0123 14:31:06.161776 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:06 crc kubenswrapper[4775]: E0123 14:31:06.163541 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:06 crc kubenswrapper[4775]: E0123 14:31:06.169079 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:06 crc kubenswrapper[4775]: E0123 14:31:06.169133 4775 prober.go:104] "Probe errored" err="rpc error: 
code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.332437 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.340279 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.364446 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.496726 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp587\" (UniqueName: \"kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587\") pod \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.496973 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data\") pod \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\" (UID: \"60634ae6-20de-4c41-b4bf-0fceda1df7e5\") " Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.501553 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587" (OuterVolumeSpecName: "kube-api-access-pp587") pod "60634ae6-20de-4c41-b4bf-0fceda1df7e5" (UID: "60634ae6-20de-4c41-b4bf-0fceda1df7e5"). InnerVolumeSpecName "kube-api-access-pp587". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.527892 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data" (OuterVolumeSpecName: "config-data") pod "60634ae6-20de-4c41-b4bf-0fceda1df7e5" (UID: "60634ae6-20de-4c41-b4bf-0fceda1df7e5"). InnerVolumeSpecName "config-data". 
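
The three "ExecSync cmd from runtime service failed ... container is stopping" errors and the "Probe errored" above, like the earlier "ContainerStatus from runtime service failed ... NotFound" right after RemoveContainer, are shutdown races rather than faults: exec probes and status queries are hitting containers that are already terminating or gone. A sketch that tallies error-level (klog "E"-prefixed) entries by the kubelet source file that emitted them, to separate this teardown noise from anything unexpected; kubelet.log is again an assumed capture:

```go
// err_tally.go — tally error-level kubelet entries by source file,
// e.g. log.go (CRI call failures), prober.go, pod_workers.go. Sketch.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"sort"
)

// klog header shape: E0123 14:31:06.161776    4775 log.go:32] ...
var errLine = regexp.MustCompile(`\bE\d{4} \d{2}:\d{2}:\d{2}\.\d+\s+\d+ ([\w.]+):\d+\]`)

func main() {
	f, err := os.Open("kubelet.log")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
	for sc.Scan() {
		// FindAll copes with the wrapped lines above, which can hold
		// several journal entries each.
		for _, m := range errLine.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	files := make([]string, 0, len(counts))
	for k := range counts {
		files = append(files, k)
	}
	sort.Strings(files)
	for _, k := range files {
		fmt.Printf("%6d  %s\n", counts[k], k)
	}
}
```
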
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.599560 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60634ae6-20de-4c41-b4bf-0fceda1df7e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.599616 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp587\" (UniqueName: \"kubernetes.io/projected/60634ae6-20de-4c41-b4bf-0fceda1df7e5-kube-api-access-pp587\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.984690 4775 generic.go:334] "Generic (PLEG): container finished" podID="e62166aa-4f54-4eb0-aae1-69113a424df6" containerID="7683bb31e0e3c33c12802ae8ef8cb905ee4053a0b8cff940fda829caf0802a6a" exitCode=0 Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.984746 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" event={"ID":"e62166aa-4f54-4eb0-aae1-69113a424df6","Type":"ContainerDied","Data":"7683bb31e0e3c33c12802ae8ef8cb905ee4053a0b8cff940fda829caf0802a6a"} Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.986331 4775 generic.go:334] "Generic (PLEG): container finished" podID="2868ba1d-ce52-4e16-b1a5-f8a699c07b94" containerID="799ce1823863a3c15c53a4d22727a916392492bc10d370e2462dbc8b6ea31ac8" exitCode=0 Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.986373 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" event={"ID":"2868ba1d-ce52-4e16-b1a5-f8a699c07b94","Type":"ContainerDied","Data":"799ce1823863a3c15c53a4d22727a916392492bc10d370e2462dbc8b6ea31ac8"} Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.988423 4775 generic.go:334] "Generic (PLEG): container finished" podID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" containerID="9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e" exitCode=0 Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.988454 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.988474 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"60634ae6-20de-4c41-b4bf-0fceda1df7e5","Type":"ContainerDied","Data":"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e"} Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.988511 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"60634ae6-20de-4c41-b4bf-0fceda1df7e5","Type":"ContainerDied","Data":"d8f1f0f6e7f62499789debda728a77acf84ec6f7e20d7816daa6f9e8b8134f7b"} Jan 23 14:31:06 crc kubenswrapper[4775]: I0123 14:31:06.988534 4775 scope.go:117] "RemoveContainer" containerID="9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.016098 4775 scope.go:117] "RemoveContainer" containerID="9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e" Jan 23 14:31:07 crc kubenswrapper[4775]: E0123 14:31:07.018060 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e\": container with ID starting with 9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e not found: ID does not exist" containerID="9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.018107 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e"} err="failed to get container status \"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e\": rpc error: code = NotFound desc = could not find container \"9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e\": container with ID starting with 9417ed01719b61c92b4fcb5028120a0468f7bac0cd704d312ce33d3022cbce9e not found: ID does not exist" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.049290 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.057199 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.351748 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.512366 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts\") pod \"74a79494-7611-49ab-9b32-167dbeba6bb6\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.513103 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5f48w\" (UniqueName: \"kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w\") pod \"74a79494-7611-49ab-9b32-167dbeba6bb6\" (UID: \"74a79494-7611-49ab-9b32-167dbeba6bb6\") " Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.513996 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "74a79494-7611-49ab-9b32-167dbeba6bb6" (UID: "74a79494-7611-49ab-9b32-167dbeba6bb6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.521248 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w" (OuterVolumeSpecName: "kube-api-access-5f48w") pod "74a79494-7611-49ab-9b32-167dbeba6bb6" (UID: "74a79494-7611-49ab-9b32-167dbeba6bb6"). InnerVolumeSpecName "kube-api-access-5f48w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.615943 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/74a79494-7611-49ab-9b32-167dbeba6bb6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.615989 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5f48w\" (UniqueName: \"kubernetes.io/projected/74a79494-7611-49ab-9b32-167dbeba6bb6-kube-api-access-5f48w\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.714299 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:31:07 crc kubenswrapper[4775]: E0123 14:31:07.714854 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.727163 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" path="/var/lib/kubelet/pods/60634ae6-20de-4c41-b4bf-0fceda1df7e5/volumes" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.728181 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" path="/var/lib/kubelet/pods/d6487ecc-f390-4837-8097-15e1b0bc28ac/volumes" Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.999270 4775 generic.go:334] "Generic (PLEG): 
container finished" podID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" containerID="0fc3116ad5e11a579023342a2bde7e94e9992b7817bc89662a590eddceef91c7" exitCode=0 Jan 23 14:31:07 crc kubenswrapper[4775]: I0123 14:31:07.999389 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"5c5ea649-3ec6-4684-a543-92cbb2561c2c","Type":"ContainerDied","Data":"0fc3116ad5e11a579023342a2bde7e94e9992b7817bc89662a590eddceef91c7"} Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.003269 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" event={"ID":"74a79494-7611-49ab-9b32-167dbeba6bb6","Type":"ContainerDied","Data":"c3f23419eba8102b471ea95d077ddfa50f5c43e670169bf2430a062fd39be852"} Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.003316 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3f23419eba8102b471ea95d077ddfa50f5c43e670169bf2430a062fd39be852" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.003327 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell0dec4-account-delete-2b7mr" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.113054 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.226575 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data\") pod \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.230008 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qk566\" (UniqueName: \"kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566\") pod \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\" (UID: \"5c5ea649-3ec6-4684-a543-92cbb2561c2c\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.252182 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566" (OuterVolumeSpecName: "kube-api-access-qk566") pod "5c5ea649-3ec6-4684-a543-92cbb2561c2c" (UID: "5c5ea649-3ec6-4684-a543-92cbb2561c2c"). InnerVolumeSpecName "kube-api-access-qk566". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.267144 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data" (OuterVolumeSpecName: "config-data") pod "5c5ea649-3ec6-4684-a543-92cbb2561c2c" (UID: "5c5ea649-3ec6-4684-a543-92cbb2561c2c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.332176 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c5ea649-3ec6-4684-a543-92cbb2561c2c-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.332210 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qk566\" (UniqueName: \"kubernetes.io/projected/5c5ea649-3ec6-4684-a543-92cbb2561c2c-kube-api-access-qk566\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.446402 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.536327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2cdn\" (UniqueName: \"kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn\") pod \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.536376 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts\") pod \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\" (UID: \"2868ba1d-ce52-4e16-b1a5-f8a699c07b94\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.537016 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2868ba1d-ce52-4e16-b1a5-f8a699c07b94" (UID: "2868ba1d-ce52-4e16-b1a5-f8a699c07b94"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.540739 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn" (OuterVolumeSpecName: "kube-api-access-b2cdn") pod "2868ba1d-ce52-4e16-b1a5-f8a699c07b94" (UID: "2868ba1d-ce52-4e16-b1a5-f8a699c07b94"). InnerVolumeSpecName "kube-api-access-b2cdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.574943 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.605406 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.648127 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2cdn\" (UniqueName: \"kubernetes.io/projected/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-kube-api-access-b2cdn\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.648168 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2868ba1d-ce52-4e16-b1a5-f8a699c07b94-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.749640 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxwzv\" (UniqueName: \"kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv\") pod \"1b50fc49-3582-416c-9b89-0de07e733931\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.749752 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs\") pod \"1b50fc49-3582-416c-9b89-0de07e733931\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.749823 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts\") pod \"e62166aa-4f54-4eb0-aae1-69113a424df6\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.749862 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data\") pod \"1b50fc49-3582-416c-9b89-0de07e733931\" (UID: \"1b50fc49-3582-416c-9b89-0de07e733931\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.749934 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxlnq\" (UniqueName: \"kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq\") pod \"e62166aa-4f54-4eb0-aae1-69113a424df6\" (UID: \"e62166aa-4f54-4eb0-aae1-69113a424df6\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.750443 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e62166aa-4f54-4eb0-aae1-69113a424df6" (UID: "e62166aa-4f54-4eb0-aae1-69113a424df6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.750533 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs" (OuterVolumeSpecName: "logs") pod "1b50fc49-3582-416c-9b89-0de07e733931" (UID: "1b50fc49-3582-416c-9b89-0de07e733931"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.753453 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv" (OuterVolumeSpecName: "kube-api-access-nxwzv") pod "1b50fc49-3582-416c-9b89-0de07e733931" (UID: "1b50fc49-3582-416c-9b89-0de07e733931"). InnerVolumeSpecName "kube-api-access-nxwzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.756374 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq" (OuterVolumeSpecName: "kube-api-access-wxlnq") pod "e62166aa-4f54-4eb0-aae1-69113a424df6" (UID: "e62166aa-4f54-4eb0-aae1-69113a424df6"). InnerVolumeSpecName "kube-api-access-wxlnq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.775514 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data" (OuterVolumeSpecName: "config-data") pod "1b50fc49-3582-416c-9b89-0de07e733931" (UID: "1b50fc49-3582-416c-9b89-0de07e733931"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.776028 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.851516 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b50fc49-3582-416c-9b89-0de07e733931-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.851551 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxlnq\" (UniqueName: \"kubernetes.io/projected/e62166aa-4f54-4eb0-aae1-69113a424df6-kube-api-access-wxlnq\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.851564 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxwzv\" (UniqueName: \"kubernetes.io/projected/1b50fc49-3582-416c-9b89-0de07e733931-kube-api-access-nxwzv\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.851575 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b50fc49-3582-416c-9b89-0de07e733931-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.851587 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e62166aa-4f54-4eb0-aae1-69113a424df6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.953264 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs\") pod \"40c54c9a-246a-4dab-af73-779d4d8539e4\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.953539 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgxwb\" (UniqueName: \"kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb\") pod 
\"40c54c9a-246a-4dab-af73-779d4d8539e4\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.953596 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data\") pod \"40c54c9a-246a-4dab-af73-779d4d8539e4\" (UID: \"40c54c9a-246a-4dab-af73-779d4d8539e4\") " Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.954730 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs" (OuterVolumeSpecName: "logs") pod "40c54c9a-246a-4dab-af73-779d4d8539e4" (UID: "40c54c9a-246a-4dab-af73-779d4d8539e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.958228 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb" (OuterVolumeSpecName: "kube-api-access-vgxwb") pod "40c54c9a-246a-4dab-af73-779d4d8539e4" (UID: "40c54c9a-246a-4dab-af73-779d4d8539e4"). InnerVolumeSpecName "kube-api-access-vgxwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:08 crc kubenswrapper[4775]: I0123 14:31:08.996640 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data" (OuterVolumeSpecName: "config-data") pod "40c54c9a-246a-4dab-af73-779d4d8539e4" (UID: "40c54c9a-246a-4dab-af73-779d4d8539e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.018018 4775 generic.go:334] "Generic (PLEG): container finished" podID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerID="19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6" exitCode=0 Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.018108 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerDied","Data":"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.018538 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"40c54c9a-246a-4dab-af73-779d4d8539e4","Type":"ContainerDied","Data":"cc067c426dd03351b5a8a8591d3c2c83477c0b5d51ea784970cfb53f7e6d267e"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.018138 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.018873 4775 scope.go:117] "RemoveContainer" containerID="19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.023136 4775 generic.go:334] "Generic (PLEG): container finished" podID="1b50fc49-3582-416c-9b89-0de07e733931" containerID="3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058" exitCode=0 Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.023272 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerDied","Data":"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.023312 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"1b50fc49-3582-416c-9b89-0de07e733931","Type":"ContainerDied","Data":"0da722dd90642caf85fa0f11331565aec51183c8f53f1cf43b2602bc06530edf"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.023570 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.029230 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" event={"ID":"e62166aa-4f54-4eb0-aae1-69113a424df6","Type":"ContainerDied","Data":"56c812b1ab00fd7b69cb6786223a7c5ead5a6096821beab6667bb79fc9b54916"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.029294 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56c812b1ab00fd7b69cb6786223a7c5ead5a6096821beab6667bb79fc9b54916" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.029309 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi74fa-account-delete-hs5ds" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.031494 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"5c5ea649-3ec6-4684-a543-92cbb2561c2c","Type":"ContainerDied","Data":"aae6c41a06b90b700f10ac781242a8cc1f26c49368ae3d0b71804b4f7c54253a"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.031525 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.035361 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" event={"ID":"2868ba1d-ce52-4e16-b1a5-f8a699c07b94","Type":"ContainerDied","Data":"0d3e2cb601d2914db92f9a6a496a379ceafd3bfd20c4312448a83fd697cb56ef"} Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.035443 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3e2cb601d2914db92f9a6a496a379ceafd3bfd20c4312448a83fd697cb56ef" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.035550 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1fcdd-account-delete-xg5hq" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.055909 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgxwb\" (UniqueName: \"kubernetes.io/projected/40c54c9a-246a-4dab-af73-779d4d8539e4-kube-api-access-vgxwb\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.055942 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40c54c9a-246a-4dab-af73-779d4d8539e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.055956 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40c54c9a-246a-4dab-af73-779d4d8539e4-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.107039 4775 scope.go:117] "RemoveContainer" containerID="92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.132946 4775 scope.go:117] "RemoveContainer" containerID="19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6" Jan 23 14:31:09 crc kubenswrapper[4775]: E0123 14:31:09.134466 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6\": container with ID starting with 19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6 not found: ID does not exist" containerID="19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.134551 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6"} err="failed to get container status \"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6\": rpc error: code = NotFound desc = could not find container \"19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6\": container with ID starting with 19f64885adeeb673d9cba11e78c8b70596ea5a7795eddab4d7f824f5be3cd3c6 not found: ID does not exist" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.134587 4775 scope.go:117] "RemoveContainer" containerID="92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d" Jan 23 14:31:09 crc kubenswrapper[4775]: E0123 14:31:09.135297 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d\": container with ID starting with 92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d not found: ID does not exist" containerID="92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.135351 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d"} err="failed to get container status \"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d\": rpc error: code = NotFound desc = could not find container \"92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d\": container with ID starting with 92c8db5180b73a5bbb803a67b1485926a5904ee84f310a02f878949deb43649d not found: ID does not exist" Jan 23 14:31:09 crc 
kubenswrapper[4775]: I0123 14:31:09.135405 4775 scope.go:117] "RemoveContainer" containerID="3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.138488 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.150839 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.164250 4775 scope.go:117] "RemoveContainer" containerID="f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.166377 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.174920 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.183322 4775 scope.go:117] "RemoveContainer" containerID="3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.183659 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: E0123 14:31:09.184018 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058\": container with ID starting with 3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058 not found: ID does not exist" containerID="3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.184172 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058"} err="failed to get container status \"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058\": rpc error: code = NotFound desc = could not find container \"3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058\": container with ID starting with 3b2c4fa8ecf48ebe29b25c30f72c2762525e314644186ec94469e2e547873058 not found: ID does not exist" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.184335 4775 scope.go:117] "RemoveContainer" containerID="f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e" Jan 23 14:31:09 crc kubenswrapper[4775]: E0123 14:31:09.184863 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e\": container with ID starting with f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e not found: ID does not exist" containerID="f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.184930 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e"} err="failed to get container status \"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e\": rpc error: code = NotFound desc = could not find container \"f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e\": container with ID starting with 
f3fd1649a2aded52e00c39e1c1d72e905fd324149ec6f8d6ddfb00f2c288864e not found: ID does not exist" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.184974 4775 scope.go:117] "RemoveContainer" containerID="0fc3116ad5e11a579023342a2bde7e94e9992b7817bc89662a590eddceef91c7" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.191801 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.697037 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nvvdc"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.704791 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nvvdc"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.731246 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b50fc49-3582-416c-9b89-0de07e733931" path="/var/lib/kubelet/pods/1b50fc49-3582-416c-9b89-0de07e733931/volumes" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.732279 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" path="/var/lib/kubelet/pods/40c54c9a-246a-4dab-af73-779d4d8539e4/volumes" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.733353 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" path="/var/lib/kubelet/pods/5c5ea649-3ec6-4684-a543-92cbb2561c2c/volumes" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.734779 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4" path="/var/lib/kubelet/pods/f46b7c09-6e8e-47ac-b6a0-b42237c9f5a4/volumes" Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.735656 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell0dec4-account-delete-2b7mr"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.735685 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.742929 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell0dec4-account-delete-2b7mr"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.749683 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-dec4-account-create-update-thscn"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.816062 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-q4r8h"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.826530 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-q4r8h"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.842413 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.853274 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.862240 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-fcdd-account-create-update-58ttw"] Jan 23 14:31:09 crc kubenswrapper[4775]: I0123 14:31:09.871732 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/novacell1fcdd-account-delete-xg5hq"] Jan 23 14:31:11 crc kubenswrapper[4775]: E0123 14:31:11.163308 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:11 crc kubenswrapper[4775]: E0123 14:31:11.165543 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:11 crc kubenswrapper[4775]: E0123 14:31:11.167396 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:31:11 crc kubenswrapper[4775]: E0123 14:31:11.167461 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:11 crc kubenswrapper[4775]: I0123 14:31:11.735227 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26928cf5-7a29-4fab-a501-5746726fc42a" path="/var/lib/kubelet/pods/26928cf5-7a29-4fab-a501-5746726fc42a/volumes" Jan 23 14:31:11 crc kubenswrapper[4775]: I0123 14:31:11.736657 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2868ba1d-ce52-4e16-b1a5-f8a699c07b94" path="/var/lib/kubelet/pods/2868ba1d-ce52-4e16-b1a5-f8a699c07b94/volumes" Jan 23 14:31:11 crc kubenswrapper[4775]: I0123 14:31:11.738051 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5980f4a0-814a-4f66-b637-80071a62061b" path="/var/lib/kubelet/pods/5980f4a0-814a-4f66-b637-80071a62061b/volumes" Jan 23 14:31:11 crc kubenswrapper[4775]: I0123 14:31:11.739292 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a79494-7611-49ab-9b32-167dbeba6bb6" path="/var/lib/kubelet/pods/74a79494-7611-49ab-9b32-167dbeba6bb6/volumes" Jan 23 14:31:11 crc kubenswrapper[4775]: I0123 14:31:11.741589 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cce1ea66-c6e5-41e7-b0fc-f915fab736f9" path="/var/lib/kubelet/pods/cce1ea66-c6e5-41e7-b0fc-f915fab736f9/volumes" Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.577985 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4dbx9"] Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.595711 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-4dbx9"] Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.606110 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"] Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.618227 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/novaapi74fa-account-delete-hs5ds"] Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.629553 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-74fa-account-create-update-r8n42"] Jan 23 14:31:14 crc kubenswrapper[4775]: I0123 14:31:14.635155 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapi74fa-account-delete-hs5ds"] Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.026842 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.099217 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data\") pod \"3e96bb87-5923-457f-bf02-51a1182e90bc\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.100164 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj4np\" (UniqueName: \"kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np\") pod \"3e96bb87-5923-457f-bf02-51a1182e90bc\" (UID: \"3e96bb87-5923-457f-bf02-51a1182e90bc\") " Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.101532 4775 generic.go:334] "Generic (PLEG): container finished" podID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" exitCode=0 Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.101577 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e96bb87-5923-457f-bf02-51a1182e90bc","Type":"ContainerDied","Data":"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3"} Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.101604 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3e96bb87-5923-457f-bf02-51a1182e90bc","Type":"ContainerDied","Data":"5bbc8cbd22e1e763806e59239a30a31f8865fb7589db1e6ad2f16cc53daa3460"} Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.101620 4775 scope.go:117] "RemoveContainer" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.101736 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.105275 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np" (OuterVolumeSpecName: "kube-api-access-pj4np") pod "3e96bb87-5923-457f-bf02-51a1182e90bc" (UID: "3e96bb87-5923-457f-bf02-51a1182e90bc"). InnerVolumeSpecName "kube-api-access-pj4np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.138946 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data" (OuterVolumeSpecName: "config-data") pod "3e96bb87-5923-457f-bf02-51a1182e90bc" (UID: "3e96bb87-5923-457f-bf02-51a1182e90bc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.142264 4775 scope.go:117] "RemoveContainer" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" Jan 23 14:31:15 crc kubenswrapper[4775]: E0123 14:31:15.142966 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3\": container with ID starting with ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3 not found: ID does not exist" containerID="ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.143006 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3"} err="failed to get container status \"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3\": rpc error: code = NotFound desc = could not find container \"ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3\": container with ID starting with ab7afc6184df7a26515289f0daca80ac0daabcd95529ee2de4b1ba321ce191e3 not found: ID does not exist" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.203625 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e96bb87-5923-457f-bf02-51a1182e90bc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.203662 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj4np\" (UniqueName: \"kubernetes.io/projected/3e96bb87-5923-457f-bf02-51a1182e90bc-kube-api-access-pj4np\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.431982 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.437889 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.731636 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" path="/var/lib/kubelet/pods/3e96bb87-5923-457f-bf02-51a1182e90bc/volumes" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.732743 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ed8da8c-1d52-44a3-b1c8-b68000003d91" path="/var/lib/kubelet/pods/9ed8da8c-1d52-44a3-b1c8-b68000003d91/volumes" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.734047 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff" path="/var/lib/kubelet/pods/cdce4e03-ab75-4cf0-ae3c-8a9fff7ee6ff/volumes" Jan 23 14:31:15 crc kubenswrapper[4775]: I0123 14:31:15.735010 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62166aa-4f54-4eb0-aae1-69113a424df6" path="/var/lib/kubelet/pods/e62166aa-4f54-4eb0-aae1-69113a424df6/volumes" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.015975 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-hn7kx"] Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016577 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a79494-7611-49ab-9b32-167dbeba6bb6" containerName="mariadb-account-delete" Jan 23 
14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016591 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a79494-7611-49ab-9b32-167dbeba6bb6" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016601 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62166aa-4f54-4eb0-aae1-69113a424df6" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016610 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62166aa-4f54-4eb0-aae1-69113a424df6" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016626 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016635 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016656 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-log" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016664 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-log" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016676 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-api" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016684 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-api" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016701 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016710 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016724 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016732 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016748 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-log" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016756 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-log" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016768 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2868ba1d-ce52-4e16-b1a5-f8a699c07b94" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016776 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="2868ba1d-ce52-4e16-b1a5-f8a699c07b94" containerName="mariadb-account-delete" Jan 23 
14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016790 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016816 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: E0123 14:31:17.016832 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-metadata" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.016840 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-metadata" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017017 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-metadata" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017036 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-log" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017050 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6487ecc-f390-4837-8097-15e1b0bc28ac" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017061 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e96bb87-5923-457f-bf02-51a1182e90bc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017074 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="60634ae6-20de-4c41-b4bf-0fceda1df7e5" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017087 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a79494-7611-49ab-9b32-167dbeba6bb6" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017101 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b50fc49-3582-416c-9b89-0de07e733931" containerName="nova-kuttl-metadata-log" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017113 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="2868ba1d-ce52-4e16-b1a5-f8a699c07b94" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017125 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c5ea649-3ec6-4684-a543-92cbb2561c2c" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017136 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c54c9a-246a-4dab-af73-779d4d8539e4" containerName="nova-kuttl-api-api" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017150 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62166aa-4f54-4eb0-aae1-69113a424df6" containerName="mariadb-account-delete" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.017730 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.032901 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hn7kx"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.113851 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-bp7mf"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.114718 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.123323 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-bp7mf"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.140439 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts\") pod \"nova-api-db-create-hn7kx\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.140703 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gvsp\" (UniqueName: \"kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp\") pod \"nova-api-db-create-hn7kx\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.242500 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.242591 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gvsp\" (UniqueName: \"kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp\") pod \"nova-api-db-create-hn7kx\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.242636 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts\") pod \"nova-api-db-create-hn7kx\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.242680 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bmj8\" (UniqueName: \"kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.243726 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts\") pod \"nova-api-db-create-hn7kx\" (UID: 
\"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.258671 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gvsp\" (UniqueName: \"kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp\") pod \"nova-api-db-create-hn7kx\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.344632 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bmj8\" (UniqueName: \"kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.344701 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.345427 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.351147 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.359096 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bmj8\" (UniqueName: \"kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8\") pod \"nova-cell0-db-create-bp7mf\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.432605 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.676973 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.678451 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.680437 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.701064 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.702705 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.704614 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.739822 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-pmc6n"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.741270 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.745636 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.745712 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.748684 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.748875 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.777095 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.794997 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.826896 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-pmc6n"] Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852329 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6tpb\" (UniqueName: \"kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb\") pod \"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852473 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlbl\" (UniqueName: \"kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852536 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852592 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852667 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmrfj\" (UniqueName: \"kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852751 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr54f\" (UniqueName: \"kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852780 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.852830 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts\") pod \"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.954839 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmrfj\" (UniqueName: \"kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.954906 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr54f\" (UniqueName: \"kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.954929 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.954954 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts\") pod 
\"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955019 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6tpb\" (UniqueName: \"kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb\") pod \"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955075 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmlbl\" (UniqueName: \"kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955098 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955900 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.955921 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.956739 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.960459 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts\") pod \"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc 
kubenswrapper[4775]: I0123 14:31:17.977538 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6tpb\" (UniqueName: \"kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb\") pod \"nova-cell1-ba32-account-create-update-8xsh6\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.978480 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmrfj\" (UniqueName: \"kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj\") pod \"nova-cell1-db-create-pmc6n\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.979386 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmlbl\" (UniqueName: \"kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl\") pod \"nova-api-9a1c-account-create-update-lmjgw\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:17 crc kubenswrapper[4775]: I0123 14:31:17.980961 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr54f\" (UniqueName: \"kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f\") pod \"nova-cell0-6ec2-account-create-update-6ntlz\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.004052 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.034637 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.083389 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.109757 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.184816 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hn7kx"] Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.224878 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-bp7mf"] Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.243150 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b494b92_3cd1_4b60_853c_a135bb158d8c.slice/crio-47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c WatchSource:0}: Error finding container 47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c: Status 404 returned error can't find the container with id 47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.503889 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6"] Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.509993 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffe262ed_6f79_4dad_91c6_168b164a6459.slice/crio-751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c WatchSource:0}: Error finding container 751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c: Status 404 returned error can't find the container with id 751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.563470 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz"] Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.566213 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod500dfca1_a7c0_488c_89ba_2d750245e322.slice/crio-da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d WatchSource:0}: Error finding container da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d: Status 404 returned error can't find the container with id da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.639437 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-pmc6n"] Jan 23 14:31:18 crc kubenswrapper[4775]: I0123 14:31:18.645417 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw"] Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.737943 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9857104_b2d2_4b42_a96d_2f9f1fadc406.slice/crio-04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05 WatchSource:0}: Error finding container 04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05: Status 404 returned error can't find the container with id 04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05 Jan 23 14:31:18 crc kubenswrapper[4775]: W0123 14:31:18.739023 4775 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68223c6c_51af_4369_87c2_368ffe71edb7.slice/crio-0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121 WatchSource:0}: Error finding container 0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121: Status 404 returned error can't find the container with id 0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121 Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.142309 4775 generic.go:334] "Generic (PLEG): container finished" podID="a9857104-b2d2-4b42-a96d-2f9f1fadc406" containerID="6d2aa10a47d2fcb45e935313a220958ccb5ce5c86f680afa48a823e4a53178f0" exitCode=0 Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.142422 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" event={"ID":"a9857104-b2d2-4b42-a96d-2f9f1fadc406","Type":"ContainerDied","Data":"6d2aa10a47d2fcb45e935313a220958ccb5ce5c86f680afa48a823e4a53178f0"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.142462 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" event={"ID":"a9857104-b2d2-4b42-a96d-2f9f1fadc406","Type":"ContainerStarted","Data":"04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.144874 4775 generic.go:334] "Generic (PLEG): container finished" podID="ffe262ed-6f79-4dad-91c6-168b164a6459" containerID="ecad2940c2ff1569920921fdd03a6c333edaa15c5f0818afcf6db854f924e5ab" exitCode=0 Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.144975 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" event={"ID":"ffe262ed-6f79-4dad-91c6-168b164a6459","Type":"ContainerDied","Data":"ecad2940c2ff1569920921fdd03a6c333edaa15c5f0818afcf6db854f924e5ab"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.145014 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" event={"ID":"ffe262ed-6f79-4dad-91c6-168b164a6459","Type":"ContainerStarted","Data":"751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.147611 4775 generic.go:334] "Generic (PLEG): container finished" podID="5b494b92-3cd1-4b60-853c-a135bb158d8c" containerID="33a99232a0ae7d230c0ca5e3a7fcc4bde1520167a1ceba4a466d07976af3e8d1" exitCode=0 Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.147700 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" event={"ID":"5b494b92-3cd1-4b60-853c-a135bb158d8c","Type":"ContainerDied","Data":"33a99232a0ae7d230c0ca5e3a7fcc4bde1520167a1ceba4a466d07976af3e8d1"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.147728 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" event={"ID":"5b494b92-3cd1-4b60-853c-a135bb158d8c","Type":"ContainerStarted","Data":"47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.149725 4775 generic.go:334] "Generic (PLEG): container finished" podID="500dfca1-a7c0-488c-89ba-2d750245e322" containerID="46c83cc2befa55d2730e0306d1a537315368a038fa5d8e25f6f9a9178ae4909d" exitCode=0 Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.149853 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" event={"ID":"500dfca1-a7c0-488c-89ba-2d750245e322","Type":"ContainerDied","Data":"46c83cc2befa55d2730e0306d1a537315368a038fa5d8e25f6f9a9178ae4909d"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.149893 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" event={"ID":"500dfca1-a7c0-488c-89ba-2d750245e322","Type":"ContainerStarted","Data":"da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.151629 4775 generic.go:334] "Generic (PLEG): container finished" podID="5a0d129e-9a65-484c-b8a6-ca5a0120d95d" containerID="36da3a3e665fb3823516d8d90857086698e0e37c43b293f38337204d81ca04a2" exitCode=0 Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.151703 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hn7kx" event={"ID":"5a0d129e-9a65-484c-b8a6-ca5a0120d95d","Type":"ContainerDied","Data":"36da3a3e665fb3823516d8d90857086698e0e37c43b293f38337204d81ca04a2"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.151732 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hn7kx" event={"ID":"5a0d129e-9a65-484c-b8a6-ca5a0120d95d","Type":"ContainerStarted","Data":"fe3c0428929ea5490420508d57fc508fae3beca204a6fe2065d2af142f3c5a26"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.153935 4775 generic.go:334] "Generic (PLEG): container finished" podID="68223c6c-51af-4369-87c2-368ffe71edb7" containerID="552a75aff373d33848d323f4e1a099464b0ab75b386e7916291405fa3aa8b333" exitCode=0 Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.153986 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" event={"ID":"68223c6c-51af-4369-87c2-368ffe71edb7","Type":"ContainerDied","Data":"552a75aff373d33848d323f4e1a099464b0ab75b386e7916291405fa3aa8b333"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.154017 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" event={"ID":"68223c6c-51af-4369-87c2-368ffe71edb7","Type":"ContainerStarted","Data":"0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121"} Jan 23 14:31:19 crc kubenswrapper[4775]: I0123 14:31:19.714691 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:31:19 crc kubenswrapper[4775]: E0123 14:31:19.715203 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.663657 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.799126 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mr54f\" (UniqueName: \"kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f\") pod \"500dfca1-a7c0-488c-89ba-2d750245e322\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.799470 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts\") pod \"500dfca1-a7c0-488c-89ba-2d750245e322\" (UID: \"500dfca1-a7c0-488c-89ba-2d750245e322\") " Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.800115 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "500dfca1-a7c0-488c-89ba-2d750245e322" (UID: "500dfca1-a7c0-488c-89ba-2d750245e322"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.804590 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f" (OuterVolumeSpecName: "kube-api-access-mr54f") pod "500dfca1-a7c0-488c-89ba-2d750245e322" (UID: "500dfca1-a7c0-488c-89ba-2d750245e322"). InnerVolumeSpecName "kube-api-access-mr54f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.858597 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.862726 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.867867 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.880238 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.884276 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.901847 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mr54f\" (UniqueName: \"kubernetes.io/projected/500dfca1-a7c0-488c-89ba-2d750245e322-kube-api-access-mr54f\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:20 crc kubenswrapper[4775]: I0123 14:31:20.901877 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/500dfca1-a7c0-488c-89ba-2d750245e322-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.003222 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts\") pod \"5b494b92-3cd1-4b60-853c-a135bb158d8c\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.004046 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5b494b92-3cd1-4b60-853c-a135bb158d8c" (UID: "5b494b92-3cd1-4b60-853c-a135bb158d8c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.004313 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts\") pod \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005022 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bmj8\" (UniqueName: \"kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8\") pod \"5b494b92-3cd1-4b60-853c-a135bb158d8c\" (UID: \"5b494b92-3cd1-4b60-853c-a135bb158d8c\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005054 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9857104-b2d2-4b42-a96d-2f9f1fadc406" (UID: "a9857104-b2d2-4b42-a96d-2f9f1fadc406"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005123 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6tpb\" (UniqueName: \"kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb\") pod \"ffe262ed-6f79-4dad-91c6-168b164a6459\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005188 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmrfj\" (UniqueName: \"kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj\") pod \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\" (UID: \"a9857104-b2d2-4b42-a96d-2f9f1fadc406\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005253 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts\") pod \"68223c6c-51af-4369-87c2-368ffe71edb7\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005282 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gvsp\" (UniqueName: \"kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp\") pod \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005317 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts\") pod \"ffe262ed-6f79-4dad-91c6-168b164a6459\" (UID: \"ffe262ed-6f79-4dad-91c6-168b164a6459\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005365 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmlbl\" (UniqueName: \"kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl\") pod \"68223c6c-51af-4369-87c2-368ffe71edb7\" (UID: \"68223c6c-51af-4369-87c2-368ffe71edb7\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005408 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts\") pod \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\" (UID: \"5a0d129e-9a65-484c-b8a6-ca5a0120d95d\") " Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005892 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ffe262ed-6f79-4dad-91c6-168b164a6459" (UID: "ffe262ed-6f79-4dad-91c6-168b164a6459"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005958 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5b494b92-3cd1-4b60-853c-a135bb158d8c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.005972 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9857104-b2d2-4b42-a96d-2f9f1fadc406-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.006112 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5a0d129e-9a65-484c-b8a6-ca5a0120d95d" (UID: "5a0d129e-9a65-484c-b8a6-ca5a0120d95d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.006357 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "68223c6c-51af-4369-87c2-368ffe71edb7" (UID: "68223c6c-51af-4369-87c2-368ffe71edb7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.007620 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8" (OuterVolumeSpecName: "kube-api-access-9bmj8") pod "5b494b92-3cd1-4b60-853c-a135bb158d8c" (UID: "5b494b92-3cd1-4b60-853c-a135bb158d8c"). InnerVolumeSpecName "kube-api-access-9bmj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.008100 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp" (OuterVolumeSpecName: "kube-api-access-8gvsp") pod "5a0d129e-9a65-484c-b8a6-ca5a0120d95d" (UID: "5a0d129e-9a65-484c-b8a6-ca5a0120d95d"). InnerVolumeSpecName "kube-api-access-8gvsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.010240 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb" (OuterVolumeSpecName: "kube-api-access-l6tpb") pod "ffe262ed-6f79-4dad-91c6-168b164a6459" (UID: "ffe262ed-6f79-4dad-91c6-168b164a6459"). InnerVolumeSpecName "kube-api-access-l6tpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.010742 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl" (OuterVolumeSpecName: "kube-api-access-qmlbl") pod "68223c6c-51af-4369-87c2-368ffe71edb7" (UID: "68223c6c-51af-4369-87c2-368ffe71edb7"). InnerVolumeSpecName "kube-api-access-qmlbl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.012909 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj" (OuterVolumeSpecName: "kube-api-access-dmrfj") pod "a9857104-b2d2-4b42-a96d-2f9f1fadc406" (UID: "a9857104-b2d2-4b42-a96d-2f9f1fadc406"). InnerVolumeSpecName "kube-api-access-dmrfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107211 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107246 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bmj8\" (UniqueName: \"kubernetes.io/projected/5b494b92-3cd1-4b60-853c-a135bb158d8c-kube-api-access-9bmj8\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107272 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6tpb\" (UniqueName: \"kubernetes.io/projected/ffe262ed-6f79-4dad-91c6-168b164a6459-kube-api-access-l6tpb\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107284 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmrfj\" (UniqueName: \"kubernetes.io/projected/a9857104-b2d2-4b42-a96d-2f9f1fadc406-kube-api-access-dmrfj\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107297 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/68223c6c-51af-4369-87c2-368ffe71edb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107308 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gvsp\" (UniqueName: \"kubernetes.io/projected/5a0d129e-9a65-484c-b8a6-ca5a0120d95d-kube-api-access-8gvsp\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107319 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ffe262ed-6f79-4dad-91c6-168b164a6459-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.107329 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmlbl\" (UniqueName: \"kubernetes.io/projected/68223c6c-51af-4369-87c2-368ffe71edb7-kube-api-access-qmlbl\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.181109 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.181139 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-bp7mf" event={"ID":"5b494b92-3cd1-4b60-853c-a135bb158d8c","Type":"ContainerDied","Data":"47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.181186 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47ffa588ea1a3b82a372dfceb30239be86ea0caffc2aa0a0db10be661801863c" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.183905 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.183896 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz" event={"ID":"500dfca1-a7c0-488c-89ba-2d750245e322","Type":"ContainerDied","Data":"da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.184096 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da22143ba90138c99ec8cba553198721b3b3be69fb4543d78b581c3053b5210d" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.186106 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-hn7kx" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.186111 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-hn7kx" event={"ID":"5a0d129e-9a65-484c-b8a6-ca5a0120d95d","Type":"ContainerDied","Data":"fe3c0428929ea5490420508d57fc508fae3beca204a6fe2065d2af142f3c5a26"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.186260 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe3c0428929ea5490420508d57fc508fae3beca204a6fe2065d2af142f3c5a26" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.188093 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" event={"ID":"68223c6c-51af-4369-87c2-368ffe71edb7","Type":"ContainerDied","Data":"0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.188142 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dcbb282cf3ac3dfeca7f86ecefbdce648c845857e5b754636faff175d44a121" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.188156 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.190120 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.190148 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-pmc6n" event={"ID":"a9857104-b2d2-4b42-a96d-2f9f1fadc406","Type":"ContainerDied","Data":"04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.190258 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04d088892f573bba06de570ea835fa2db3e9c0b65b6ba16999412892a8436a05" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.192312 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" event={"ID":"ffe262ed-6f79-4dad-91c6-168b164a6459","Type":"ContainerDied","Data":"751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c"} Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.192351 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="751e1cee2e4320249a99712955cae413a4aaa316d0f619d929e5cc3475e0f26c" Jan 23 14:31:21 crc kubenswrapper[4775]: I0123 14:31:21.192384 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.668610 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"] Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669086 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a0d129e-9a65-484c-b8a6-ca5a0120d95d" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669107 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a0d129e-9a65-484c-b8a6-ca5a0120d95d" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669128 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9857104-b2d2-4b42-a96d-2f9f1fadc406" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669140 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9857104-b2d2-4b42-a96d-2f9f1fadc406" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669175 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b494b92-3cd1-4b60-853c-a135bb158d8c" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669187 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b494b92-3cd1-4b60-853c-a135bb158d8c" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669206 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68223c6c-51af-4369-87c2-368ffe71edb7" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669218 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="68223c6c-51af-4369-87c2-368ffe71edb7" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669238 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffe262ed-6f79-4dad-91c6-168b164a6459" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669250 4775 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="ffe262ed-6f79-4dad-91c6-168b164a6459" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: E0123 14:31:22.669277 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="500dfca1-a7c0-488c-89ba-2d750245e322" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669291 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="500dfca1-a7c0-488c-89ba-2d750245e322" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669525 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a0d129e-9a65-484c-b8a6-ca5a0120d95d" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669551 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="68223c6c-51af-4369-87c2-368ffe71edb7" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669569 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="500dfca1-a7c0-488c-89ba-2d750245e322" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669590 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b494b92-3cd1-4b60-853c-a135bb158d8c" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669610 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9857104-b2d2-4b42-a96d-2f9f1fadc406" containerName="mariadb-database-create" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.669631 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffe262ed-6f79-4dad-91c6-168b164a6459" containerName="mariadb-account-create-update" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.670418 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.672606 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.677706 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.678340 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-v6hs6" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.694473 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"] Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.838019 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.838092 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.838505 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ctxr\" (UniqueName: \"kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.941037 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.941143 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.941374 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ctxr\" (UniqueName: \"kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.946223 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.946852 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:22 crc kubenswrapper[4775]: I0123 14:31:22.971194 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ctxr\" (UniqueName: \"kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr\") pod \"nova-kuttl-cell0-conductor-db-sync-lcg7l\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:23 crc kubenswrapper[4775]: I0123 14:31:23.030419 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:23 crc kubenswrapper[4775]: I0123 14:31:23.534344 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"] Jan 23 14:31:23 crc kubenswrapper[4775]: W0123 14:31:23.550109 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb47b9373_0dd5_4635_a8f9_06aa0fc60174.slice/crio-791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a WatchSource:0}: Error finding container 791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a: Status 404 returned error can't find the container with id 791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a Jan 23 14:31:24 crc kubenswrapper[4775]: I0123 14:31:24.233907 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" event={"ID":"b47b9373-0dd5-4635-a8f9-06aa0fc60174","Type":"ContainerStarted","Data":"2079dfd1f90a546b48b0adf5addfe5584632a67d75d8c2a2dfabd83d3cfc9c6f"} Jan 23 14:31:24 crc kubenswrapper[4775]: I0123 14:31:24.234444 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" event={"ID":"b47b9373-0dd5-4635-a8f9-06aa0fc60174","Type":"ContainerStarted","Data":"791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a"} Jan 23 14:31:24 crc kubenswrapper[4775]: I0123 14:31:24.258618 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" podStartSLOduration=2.258566709 podStartE2EDuration="2.258566709s" podCreationTimestamp="2026-01-23 14:31:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:24.256873314 +0000 UTC m=+1631.251702134" watchObservedRunningTime="2026-01-23 14:31:24.258566709 +0000 UTC m=+1631.253395469" Jan 23 14:31:28 crc kubenswrapper[4775]: I0123 14:31:28.292244 4775 generic.go:334] "Generic (PLEG): container finished" podID="b47b9373-0dd5-4635-a8f9-06aa0fc60174" containerID="2079dfd1f90a546b48b0adf5addfe5584632a67d75d8c2a2dfabd83d3cfc9c6f" exitCode=0 Jan 23 14:31:28 crc 
kubenswrapper[4775]: I0123 14:31:28.292713 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" event={"ID":"b47b9373-0dd5-4635-a8f9-06aa0fc60174","Type":"ContainerDied","Data":"2079dfd1f90a546b48b0adf5addfe5584632a67d75d8c2a2dfabd83d3cfc9c6f"} Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.729599 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.875354 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data\") pod \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.875444 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ctxr\" (UniqueName: \"kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr\") pod \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.875475 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts\") pod \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\" (UID: \"b47b9373-0dd5-4635-a8f9-06aa0fc60174\") " Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.880698 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr" (OuterVolumeSpecName: "kube-api-access-5ctxr") pod "b47b9373-0dd5-4635-a8f9-06aa0fc60174" (UID: "b47b9373-0dd5-4635-a8f9-06aa0fc60174"). InnerVolumeSpecName "kube-api-access-5ctxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.883106 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts" (OuterVolumeSpecName: "scripts") pod "b47b9373-0dd5-4635-a8f9-06aa0fc60174" (UID: "b47b9373-0dd5-4635-a8f9-06aa0fc60174"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.919798 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data" (OuterVolumeSpecName: "config-data") pod "b47b9373-0dd5-4635-a8f9-06aa0fc60174" (UID: "b47b9373-0dd5-4635-a8f9-06aa0fc60174"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.977915 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.977968 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ctxr\" (UniqueName: \"kubernetes.io/projected/b47b9373-0dd5-4635-a8f9-06aa0fc60174-kube-api-access-5ctxr\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:29 crc kubenswrapper[4775]: I0123 14:31:29.977992 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b47b9373-0dd5-4635-a8f9-06aa0fc60174-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.319287 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" event={"ID":"b47b9373-0dd5-4635-a8f9-06aa0fc60174","Type":"ContainerDied","Data":"791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a"} Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.319351 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791c1fb139c2bde38dc6ec8b899268a93625a220432e70aaf0f5560c0277102a" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.319509 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.421677 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:30 crc kubenswrapper[4775]: E0123 14:31:30.422185 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b47b9373-0dd5-4635-a8f9-06aa0fc60174" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.422215 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b47b9373-0dd5-4635-a8f9-06aa0fc60174" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.422485 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b47b9373-0dd5-4635-a8f9-06aa0fc60174" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.423322 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.430428 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.431428 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-v6hs6" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.440017 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.491117 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m26pl\" (UniqueName: \"kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.491256 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.592738 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m26pl\" (UniqueName: \"kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.592889 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.600214 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.629687 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m26pl\" (UniqueName: \"kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:30 crc kubenswrapper[4775]: I0123 14:31:30.744700 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:31 crc kubenswrapper[4775]: I0123 14:31:31.239934 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:31:31 crc kubenswrapper[4775]: I0123 14:31:31.328919 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"84473a0d-a6e7-41ab-8b88-07b8ed888950","Type":"ContainerStarted","Data":"b44ad7319eff2652d4ad8fadab672eed48adfae26f3c8e4cc8c6eb5f3b5d2bc0"} Jan 23 14:31:32 crc kubenswrapper[4775]: I0123 14:31:32.347783 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"84473a0d-a6e7-41ab-8b88-07b8ed888950","Type":"ContainerStarted","Data":"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678"} Jan 23 14:31:32 crc kubenswrapper[4775]: I0123 14:31:32.349474 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:32 crc kubenswrapper[4775]: I0123 14:31:32.383616 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.383597307 podStartE2EDuration="2.383597307s" podCreationTimestamp="2026-01-23 14:31:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:32.378331526 +0000 UTC m=+1639.373160266" watchObservedRunningTime="2026-01-23 14:31:32.383597307 +0000 UTC m=+1639.378426047" Jan 23 14:31:34 crc kubenswrapper[4775]: I0123 14:31:34.714320 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:31:34 crc kubenswrapper[4775]: E0123 14:31:34.715157 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:31:40 crc kubenswrapper[4775]: I0123 14:31:40.788628 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.293046 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.294887 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.298161 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.298589 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.310608 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"] Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.405852 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.406105 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2q2j\" (UniqueName: \"kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.406206 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.507884 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.507967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2q2j\" (UniqueName: \"kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.508026 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.515079 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" Jan 23 14:31:41 crc 
kubenswrapper[4775]: I0123 14:31:41.516211 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.546272 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.547668 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.551051 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.551790 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2q2j\" (UniqueName: \"kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j\") pod \"nova-kuttl-cell0-cell-mapping-lnndf\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.563588 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.601029 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.602438 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.605407 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.614339 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.634986 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.650255 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.651524 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.656441 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.672871 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.689336 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.696417 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.699932 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.710310 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j8cr\" (UniqueName: \"kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.710411 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.710452 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.710483 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zdws\" (UniqueName: \"kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.710504 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.731320 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813620 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813704 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zdws\" (UniqueName: \"kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813757 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813788 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j8cr\" (UniqueName: \"kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813899 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kpdm\" (UniqueName: \"kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813949 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdnd6\" (UniqueName: \"kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813965 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.813980 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.814043 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.814081 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.814898 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.826372 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.826531 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.830087 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j8cr\" (UniqueName: \"kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr\") pod \"nova-kuttl-metadata-0\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.830167 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zdws\" (UniqueName: \"kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws\") pod \"nova-kuttl-scheduler-0\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.915539 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.915580 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdnd6\" (UniqueName: \"kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.915599 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.915649 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.915737 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kpdm\" (UniqueName: \"kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.916541 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.918996 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.919755 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.930591 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdnd6\" (UniqueName: \"kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6\") pod \"nova-kuttl-api-0\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.931492 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kpdm\" (UniqueName: \"kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:41 crc kubenswrapper[4775]: I0123 14:31:41.967121 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.035267 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.042449 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.056506 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.095966 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"]
Jan 23 14:31:42 crc kubenswrapper[4775]: W0123 14:31:42.136952 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf751d2a1_4497_4fb2_9c13_af54db584a48.slice/crio-f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1 WatchSource:0}: Error finding container f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1: Status 404 returned error can't find the container with id f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.193248 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"]
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.194431 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.196627 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.197159 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.211420 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"]
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.323277 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnnd8\" (UniqueName: \"kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.323585 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.323632 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: W0123 14:31:42.408355 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3d46934b_df3e_4beb_b74c_0c4c0d568ec4.slice/crio-b0a7899d7e01d16f0552c389419e173609b2f257ffd2f8c9231f3ed21a6bb023 WatchSource:0}: Error finding container b0a7899d7e01d16f0552c389419e173609b2f257ffd2f8c9231f3ed21a6bb023: Status 404 returned error can't find the container with id b0a7899d7e01d16f0552c389419e173609b2f257ffd2f8c9231f3ed21a6bb023
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.409718 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.425357 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnnd8\" (UniqueName: \"kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.425441 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.425475 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.429281 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.434335 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.447006 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnnd8\" (UniqueName: \"kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8\") pod \"nova-kuttl-cell1-conductor-db-sync-sq2k5\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.468215 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d46934b-df3e-4beb-b74c-0c4c0d568ec4","Type":"ContainerStarted","Data":"b0a7899d7e01d16f0552c389419e173609b2f257ffd2f8c9231f3ed21a6bb023"}
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.469632 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" event={"ID":"f751d2a1-4497-4fb2-9c13-af54db584a48","Type":"ContainerStarted","Data":"f3d6d9e6a7043cb32f7f7ac11281394b9efc64f38742f080cf771797930a3cc3"}
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.469656 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" event={"ID":"f751d2a1-4497-4fb2-9c13-af54db584a48","Type":"ContainerStarted","Data":"f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1"}
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.484279 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" podStartSLOduration=1.48426462 podStartE2EDuration="1.48426462s" podCreationTimestamp="2026-01-23 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:42.481320331 +0000 UTC m=+1649.476149061" watchObservedRunningTime="2026-01-23 14:31:42.48426462 +0000 UTC m=+1649.479093360"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.519173 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.542192 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:31:42 crc kubenswrapper[4775]: W0123 14:31:42.558992 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bbfaeee_c2d4_472c_a3da_5e055c5ecf08.slice/crio-4bb99c419617c5d3350f7f18a33de693e261be8f8db3347a5092cc5ab5db2fb2 WatchSource:0}: Error finding container 4bb99c419617c5d3350f7f18a33de693e261be8f8db3347a5092cc5ab5db2fb2: Status 404 returned error can't find the container with id 4bb99c419617c5d3350f7f18a33de693e261be8f8db3347a5092cc5ab5db2fb2
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.625943 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:31:42 crc kubenswrapper[4775]: W0123 14:31:42.634893 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a2ad7dd_d80c_4eb4_8531_c2a8208bb760.slice/crio-73c77f39c3e21579fd11ef895bb7a7f0e8b32a22edb065c50cab5df5c5dc9b81 WatchSource:0}: Error finding container 73c77f39c3e21579fd11ef895bb7a7f0e8b32a22edb065c50cab5df5c5dc9b81: Status 404 returned error can't find the container with id 73c77f39c3e21579fd11ef895bb7a7f0e8b32a22edb065c50cab5df5c5dc9b81
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.709857 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:42 crc kubenswrapper[4775]: I0123 14:31:42.973916 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"]
Jan 23 14:31:42 crc kubenswrapper[4775]: W0123 14:31:42.981287 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4701d5c_309d_4969_852b_83626330e0df.slice/crio-6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae WatchSource:0}: Error finding container 6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae: Status 404 returned error can't find the container with id 6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.485870 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d46934b-df3e-4beb-b74c-0c4c0d568ec4","Type":"ContainerStarted","Data":"e4688d8f9959793b3c09c75ee759bf5f6942cfd383400a35a6a02f55e85b0d1d"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.492684 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" event={"ID":"c4701d5c-309d-4969-852b-83626330e0df","Type":"ContainerStarted","Data":"827309d081a52f2f4fbdc446573f9dbf6756c3faef728c7a3ede91f774184851"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.492732 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" event={"ID":"c4701d5c-309d-4969-852b-83626330e0df","Type":"ContainerStarted","Data":"6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.497834 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerStarted","Data":"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.497923 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerStarted","Data":"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.497950 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerStarted","Data":"4bb99c419617c5d3350f7f18a33de693e261be8f8db3347a5092cc5ab5db2fb2"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.502862 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerStarted","Data":"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.502917 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerStarted","Data":"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.502931 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerStarted","Data":"4a5f991b7499aef449c7bccc5f57357c23ade00d8e943dce54d385ab79061ebd"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.509911 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760","Type":"ContainerStarted","Data":"b3037b72f855e3514727ac579826433af99bcec07db67273c699c91b0c386a1b"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.509955 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760","Type":"ContainerStarted","Data":"73c77f39c3e21579fd11ef895bb7a7f0e8b32a22edb065c50cab5df5c5dc9b81"}
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.537611 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.537584487 podStartE2EDuration="2.537584487s" podCreationTimestamp="2026-01-23 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:43.499887739 +0000 UTC m=+1650.494716479" watchObservedRunningTime="2026-01-23 14:31:43.537584487 +0000 UTC m=+1650.532413227"
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.549175 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.549154706 podStartE2EDuration="2.549154706s" podCreationTimestamp="2026-01-23 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:43.537567966 +0000 UTC m=+1650.532396716" watchObservedRunningTime="2026-01-23 14:31:43.549154706 +0000 UTC m=+1650.543983456"
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.562837 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" podStartSLOduration=1.562820211 podStartE2EDuration="1.562820211s" podCreationTimestamp="2026-01-23 14:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:43.562334008 +0000 UTC m=+1650.557162768" watchObservedRunningTime="2026-01-23 14:31:43.562820211 +0000 UTC m=+1650.557648951"
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.602696 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.6026742670000003 podStartE2EDuration="2.602674267s" podCreationTimestamp="2026-01-23 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:43.587589854 +0000 UTC m=+1650.582418634" watchObservedRunningTime="2026-01-23 14:31:43.602674267 +0000 UTC m=+1650.597503017"
Jan 23 14:31:43 crc kubenswrapper[4775]: I0123 14:31:43.606633 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.606615442 podStartE2EDuration="2.606615442s" podCreationTimestamp="2026-01-23 14:31:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:43.605259746 +0000 UTC m=+1650.600088486" watchObservedRunningTime="2026-01-23 14:31:43.606615442 +0000 UTC m=+1650.601444182"
Jan 23 14:31:45 crc kubenswrapper[4775]: I0123 14:31:45.528575 4775 generic.go:334] "Generic (PLEG): container finished" podID="c4701d5c-309d-4969-852b-83626330e0df" containerID="827309d081a52f2f4fbdc446573f9dbf6756c3faef728c7a3ede91f774184851" exitCode=0
Jan 23 14:31:45 crc kubenswrapper[4775]: I0123 14:31:45.528671 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" event={"ID":"c4701d5c-309d-4969-852b-83626330e0df","Type":"ContainerDied","Data":"827309d081a52f2f4fbdc446573f9dbf6756c3faef728c7a3ede91f774184851"}
Jan 23 14:31:46 crc kubenswrapper[4775]: I0123 14:31:46.967509 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:31:46 crc kubenswrapper[4775]: I0123 14:31:46.967896 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.035906 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.036411 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.057085 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.106158 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts\") pod \"c4701d5c-309d-4969-852b-83626330e0df\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") "
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.106327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnnd8\" (UniqueName: \"kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8\") pod \"c4701d5c-309d-4969-852b-83626330e0df\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") "
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.106378 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data\") pod \"c4701d5c-309d-4969-852b-83626330e0df\" (UID: \"c4701d5c-309d-4969-852b-83626330e0df\") "
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.112367 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8" (OuterVolumeSpecName: "kube-api-access-hnnd8") pod "c4701d5c-309d-4969-852b-83626330e0df" (UID: "c4701d5c-309d-4969-852b-83626330e0df"). InnerVolumeSpecName "kube-api-access-hnnd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.112742 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts" (OuterVolumeSpecName: "scripts") pod "c4701d5c-309d-4969-852b-83626330e0df" (UID: "c4701d5c-309d-4969-852b-83626330e0df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.138476 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data" (OuterVolumeSpecName: "config-data") pod "c4701d5c-309d-4969-852b-83626330e0df" (UID: "c4701d5c-309d-4969-852b-83626330e0df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.209126 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.209182 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnnd8\" (UniqueName: \"kubernetes.io/projected/c4701d5c-309d-4969-852b-83626330e0df-kube-api-access-hnnd8\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.209204 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4701d5c-309d-4969-852b-83626330e0df-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.553751 4775 generic.go:334] "Generic (PLEG): container finished" podID="f751d2a1-4497-4fb2-9c13-af54db584a48" containerID="f3d6d9e6a7043cb32f7f7ac11281394b9efc64f38742f080cf771797930a3cc3" exitCode=0
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.553897 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" event={"ID":"f751d2a1-4497-4fb2-9c13-af54db584a48","Type":"ContainerDied","Data":"f3d6d9e6a7043cb32f7f7ac11281394b9efc64f38742f080cf771797930a3cc3"}
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.555917 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5" event={"ID":"c4701d5c-309d-4969-852b-83626330e0df","Type":"ContainerDied","Data":"6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae"}
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.555987 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b9cb2fac108dba678d9ff9b704cba3010ac0cea440cea4de7bc23cec83336ae"
Jan 23 14:31:47 crc kubenswrapper[4775]: I0123 14:31:47.556161 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.099707 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:31:48 crc kubenswrapper[4775]: E0123 14:31:48.100364 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4701d5c-309d-4969-852b-83626330e0df" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.100388 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4701d5c-309d-4969-852b-83626330e0df" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.100774 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4701d5c-309d-4969-852b-83626330e0df" containerName="nova-kuttl-cell1-conductor-db-sync"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.101684 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.104452 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.112542 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.228383 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.228612 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7tvz\" (UniqueName: \"kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.330947 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7tvz\" (UniqueName: \"kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.331186 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.350180 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.365131 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7tvz\" (UniqueName: \"kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.435213 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.882007 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"
Jan 23 14:31:48 crc kubenswrapper[4775]: I0123 14:31:48.903135 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.041333 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data\") pod \"f751d2a1-4497-4fb2-9c13-af54db584a48\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") "
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.041477 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2q2j\" (UniqueName: \"kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j\") pod \"f751d2a1-4497-4fb2-9c13-af54db584a48\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") "
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.041520 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts\") pod \"f751d2a1-4497-4fb2-9c13-af54db584a48\" (UID: \"f751d2a1-4497-4fb2-9c13-af54db584a48\") "
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.044598 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts" (OuterVolumeSpecName: "scripts") pod "f751d2a1-4497-4fb2-9c13-af54db584a48" (UID: "f751d2a1-4497-4fb2-9c13-af54db584a48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.045714 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j" (OuterVolumeSpecName: "kube-api-access-z2q2j") pod "f751d2a1-4497-4fb2-9c13-af54db584a48" (UID: "f751d2a1-4497-4fb2-9c13-af54db584a48"). InnerVolumeSpecName "kube-api-access-z2q2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.077177 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data" (OuterVolumeSpecName: "config-data") pod "f751d2a1-4497-4fb2-9c13-af54db584a48" (UID: "f751d2a1-4497-4fb2-9c13-af54db584a48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.143604 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.143666 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2q2j\" (UniqueName: \"kubernetes.io/projected/f751d2a1-4497-4fb2-9c13-af54db584a48-kube-api-access-z2q2j\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.143687 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f751d2a1-4497-4fb2-9c13-af54db584a48-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.589075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf" event={"ID":"f751d2a1-4497-4fb2-9c13-af54db584a48","Type":"ContainerDied","Data":"f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1"}
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.589179 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4d3648db22ffaa2982189751dbd63d9f0d1f5aeb1792dd9802861788bfc90c1"
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.589093 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.591508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"4e279d5d-df37-483b-9bc7-682b48b2dbc4","Type":"ContainerStarted","Data":"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a"}
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.591554 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"4e279d5d-df37-483b-9bc7-682b48b2dbc4","Type":"ContainerStarted","Data":"004f895311337c942728dd641397c9a9477c224ca4d5348fe186974622dce3f9"}
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.594970 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.632077 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=1.632049224 podStartE2EDuration="1.632049224s" podCreationTimestamp="2026-01-23 14:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:49.623576788 +0000 UTC m=+1656.618405528" watchObservedRunningTime="2026-01-23 14:31:49.632049224 +0000 UTC m=+1656.626878004"
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.714261 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:31:49 crc kubenswrapper[4775]: E0123 14:31:49.714876 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.788066 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.788245 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-log" containerID="cri-o://9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23" gracePeriod=30
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.788370 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-api" containerID="cri-o://e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275" gracePeriod=30
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.834539 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.834756 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://e4688d8f9959793b3c09c75ee759bf5f6942cfd383400a35a6a02f55e85b0d1d" gracePeriod=30
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.936145 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.936392 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-log" containerID="cri-o://bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7" gracePeriod=30
Jan 23 14:31:49 crc kubenswrapper[4775]: I0123 14:31:49.936948 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df" gracePeriod=30
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.292143 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.365979 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdnd6\" (UniqueName: \"kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6\") pod \"08da1273-e72a-44f8-82d2-adf17cee8644\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") "
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.366080 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data\") pod \"08da1273-e72a-44f8-82d2-adf17cee8644\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") "
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.366144 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs\") pod \"08da1273-e72a-44f8-82d2-adf17cee8644\" (UID: \"08da1273-e72a-44f8-82d2-adf17cee8644\") "
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.366904 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs" (OuterVolumeSpecName: "logs") pod "08da1273-e72a-44f8-82d2-adf17cee8644" (UID: "08da1273-e72a-44f8-82d2-adf17cee8644"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.386713 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6" (OuterVolumeSpecName: "kube-api-access-tdnd6") pod "08da1273-e72a-44f8-82d2-adf17cee8644" (UID: "08da1273-e72a-44f8-82d2-adf17cee8644"). InnerVolumeSpecName "kube-api-access-tdnd6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.389754 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data" (OuterVolumeSpecName: "config-data") pod "08da1273-e72a-44f8-82d2-adf17cee8644" (UID: "08da1273-e72a-44f8-82d2-adf17cee8644"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.469885 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdnd6\" (UniqueName: \"kubernetes.io/projected/08da1273-e72a-44f8-82d2-adf17cee8644-kube-api-access-tdnd6\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.469926 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08da1273-e72a-44f8-82d2-adf17cee8644-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.469939 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08da1273-e72a-44f8-82d2-adf17cee8644-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.535922 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611430 4775 generic.go:334] "Generic (PLEG): container finished" podID="08da1273-e72a-44f8-82d2-adf17cee8644" containerID="e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275" exitCode=0
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611473 4775 generic.go:334] "Generic (PLEG): container finished" podID="08da1273-e72a-44f8-82d2-adf17cee8644" containerID="9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23" exitCode=143
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611488 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerDied","Data":"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"}
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611548 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerDied","Data":"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"}
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611563 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"08da1273-e72a-44f8-82d2-adf17cee8644","Type":"ContainerDied","Data":"4a5f991b7499aef449c7bccc5f57357c23ade00d8e943dce54d385ab79061ebd"}
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611584 4775 scope.go:117] "RemoveContainer" containerID="e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.611587 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616062 4775 generic.go:334] "Generic (PLEG): container finished" podID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerID="511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df" exitCode=0
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616095 4775 generic.go:334] "Generic (PLEG): container finished" podID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerID="bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7" exitCode=143
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616139 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerDied","Data":"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"}
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616176 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616190 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerDied","Data":"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"}
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.616210 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08","Type":"ContainerDied","Data":"4bb99c419617c5d3350f7f18a33de693e261be8f8db3347a5092cc5ab5db2fb2"}
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.659117 4775 scope.go:117] "RemoveContainer" containerID="9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.668961 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.677194 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs\") pod \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") "
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.677445 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j8cr\" (UniqueName: \"kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr\") pod \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") "
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.677583 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data\") pod \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\" (UID: \"7bbfaeee-c2d4-472c-a3da-5e055c5ecf08\") "
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.679666 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs" (OuterVolumeSpecName: "logs") pod "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" (UID: "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.691871 4775 scope.go:117] "RemoveContainer" containerID="e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.692113 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr" (OuterVolumeSpecName: "kube-api-access-5j8cr") pod "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" (UID: "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08"). InnerVolumeSpecName "kube-api-access-5j8cr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.694500 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275\": container with ID starting with e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275 not found: ID does not exist" containerID="e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.694653 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"} err="failed to get container status \"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275\": rpc error: code = NotFound desc = could not find container \"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275\": container with ID starting with e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275 not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.694762 4775 scope.go:117] "RemoveContainer" containerID="9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.695304 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23\": container with ID starting with 9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23 not found: ID does not exist" containerID="9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.695404 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"} err="failed to get container status \"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23\": rpc error: code = NotFound desc = could not find container \"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23\": container with ID starting with 9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23 not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.695446 4775 scope.go:117] "RemoveContainer" containerID="e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.696266 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275"} err="failed to get container status \"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275\": rpc error: code = NotFound desc = could not find container \"e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275\": container with ID starting with e3af09b05a7fa2d7437b858310eba45e89c2c249e93473c764c06bac8a889275 not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.696310 4775 scope.go:117] "RemoveContainer" containerID="9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.697008 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23"} err="failed to get container status \"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23\": rpc error: code = NotFound desc = could not find container \"9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23\": container with ID starting with 9382fe6d1138e55ab14facf039c91921a6ba0d71abf83b0486c2ac47ff0a1d23 not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.697035 4775 scope.go:117] "RemoveContainer" containerID="511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.697483 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.735137 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data" (OuterVolumeSpecName: "config-data") pod "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" (UID: "7bbfaeee-c2d4-472c-a3da-5e055c5ecf08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.735250 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.735994 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736041 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.736081 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736100 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.736151 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736168 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.736199 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f751d2a1-4497-4fb2-9c13-af54db584a48" containerName="nova-manage"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736216 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f751d2a1-4497-4fb2-9c13-af54db584a48" containerName="nova-manage"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.736261 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-api"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736278 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-api"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736650 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-api"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736696 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f751d2a1-4497-4fb2-9c13-af54db584a48" containerName="nova-manage"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736726 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736741 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" containerName="nova-kuttl-api-log"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.736762 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.738938 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.741529 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.745033 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.759789 4775 scope.go:117] "RemoveContainer" containerID="bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.784072 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.784121 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j8cr\" (UniqueName: \"kubernetes.io/projected/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-kube-api-access-5j8cr\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.784143 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.790990 4775 scope.go:117] "RemoveContainer" containerID="511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"
Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.791845 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df\": container with ID starting with 511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df not found: ID does not exist" containerID="511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"
Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.791898 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"} err="failed to get container status \"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df\": rpc error: code = NotFound desc = could not find container \"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df\": container with ID starting with 511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df not found: ID does not exist"
Jan 23 14:31:50 crc kubenswrapper[4775]:
I0123 14:31:50.791931 4775 scope.go:117] "RemoveContainer" containerID="bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7" Jan 23 14:31:50 crc kubenswrapper[4775]: E0123 14:31:50.793387 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7\": container with ID starting with bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7 not found: ID does not exist" containerID="bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.793496 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"} err="failed to get container status \"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7\": rpc error: code = NotFound desc = could not find container \"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7\": container with ID starting with bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7 not found: ID does not exist" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.793605 4775 scope.go:117] "RemoveContainer" containerID="511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.793950 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df"} err="failed to get container status \"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df\": rpc error: code = NotFound desc = could not find container \"511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df\": container with ID starting with 511a0675712bb53fb440f8e86c2c3486d8344c814fbbf7adcac683ba919802df not found: ID does not exist" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.793976 4775 scope.go:117] "RemoveContainer" containerID="bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.794721 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7"} err="failed to get container status \"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7\": rpc error: code = NotFound desc = could not find container \"bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7\": container with ID starting with bcd640f910212325f3c292b1c939f69d1c85e3171183fd1de93071b9ac6fadd7 not found: ID does not exist" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.885382 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.885689 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbc7\" (UniqueName: \"kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:50 crc 
kubenswrapper[4775]: I0123 14:31:50.885868 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.943925 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.951873 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.971361 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.972908 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.987313 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.987371 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbbc7\" (UniqueName: \"kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.987435 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.987900 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:50 crc kubenswrapper[4775]: I0123 14:31:50.993664 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.014291 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.019326 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbbc7\" (UniqueName: \"kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7\") pod \"nova-kuttl-api-0\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.027129 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.069664 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.092598 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8k7g\" (UniqueName: \"kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.092672 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.092711 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.193545 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.193617 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.195997 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8k7g\" (UniqueName: \"kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.197929 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.198701 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.222175 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8k7g\" (UniqueName: \"kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g\") pod 
\"nova-kuttl-metadata-0\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.291780 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.531673 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.626866 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerStarted","Data":"c0e734d13db605e2174d6c175ee9bc97984c9197667104eb9fbac9e883c62175"} Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.729639 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08da1273-e72a-44f8-82d2-adf17cee8644" path="/var/lib/kubelet/pods/08da1273-e72a-44f8-82d2-adf17cee8644/volumes" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.731793 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bbfaeee-c2d4-472c-a3da-5e055c5ecf08" path="/var/lib/kubelet/pods/7bbfaeee-c2d4-472c-a3da-5e055c5ecf08/volumes" Jan 23 14:31:51 crc kubenswrapper[4775]: I0123 14:31:51.741580 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:31:51 crc kubenswrapper[4775]: W0123 14:31:51.743208 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09f88f54_b0df_4938_a185_e104d3da129f.slice/crio-bd79ebf6c58c98b770d795796bd533c1b69a9ee304177bb14c2b8c21e15ca799 WatchSource:0}: Error finding container bd79ebf6c58c98b770d795796bd533c1b69a9ee304177bb14c2b8c21e15ca799: Status 404 returned error can't find the container with id bd79ebf6c58c98b770d795796bd533c1b69a9ee304177bb14c2b8c21e15ca799 Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.056978 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.082347 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.642515 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerStarted","Data":"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470"} Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.642999 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerStarted","Data":"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2"} Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.646113 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerStarted","Data":"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54"} Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.646159 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerStarted","Data":"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36"} Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.646185 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerStarted","Data":"bd79ebf6c58c98b770d795796bd533c1b69a9ee304177bb14c2b8c21e15ca799"} Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.662363 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.672397 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.672373938 podStartE2EDuration="2.672373938s" podCreationTimestamp="2026-01-23 14:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:52.66832464 +0000 UTC m=+1659.663153420" watchObservedRunningTime="2026-01-23 14:31:52.672373938 +0000 UTC m=+1659.667202708" Jan 23 14:31:52 crc kubenswrapper[4775]: I0123 14:31:52.712688 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.712653605 podStartE2EDuration="2.712653605s" podCreationTimestamp="2026-01-23 14:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:52.698426545 +0000 UTC m=+1659.693255345" watchObservedRunningTime="2026-01-23 14:31:52.712653605 +0000 UTC m=+1659.707482385" Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.667069 4775 generic.go:334] "Generic (PLEG): container finished" podID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" containerID="e4688d8f9959793b3c09c75ee759bf5f6942cfd383400a35a6a02f55e85b0d1d" exitCode=0 Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.667197 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d46934b-df3e-4beb-b74c-0c4c0d568ec4","Type":"ContainerDied","Data":"e4688d8f9959793b3c09c75ee759bf5f6942cfd383400a35a6a02f55e85b0d1d"} Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.822581 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.843491 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data\") pod \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.843690 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zdws\" (UniqueName: \"kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws\") pod \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\" (UID: \"3d46934b-df3e-4beb-b74c-0c4c0d568ec4\") " Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.856618 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws" (OuterVolumeSpecName: "kube-api-access-7zdws") pod "3d46934b-df3e-4beb-b74c-0c4c0d568ec4" (UID: "3d46934b-df3e-4beb-b74c-0c4c0d568ec4"). InnerVolumeSpecName "kube-api-access-7zdws". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.888568 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data" (OuterVolumeSpecName: "config-data") pod "3d46934b-df3e-4beb-b74c-0c4c0d568ec4" (UID: "3d46934b-df3e-4beb-b74c-0c4c0d568ec4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.946105 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:53 crc kubenswrapper[4775]: I0123 14:31:53.946167 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zdws\" (UniqueName: \"kubernetes.io/projected/3d46934b-df3e-4beb-b74c-0c4c0d568ec4-kube-api-access-7zdws\") on node \"crc\" DevicePath \"\"" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.687146 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"3d46934b-df3e-4beb-b74c-0c4c0d568ec4","Type":"ContainerDied","Data":"b0a7899d7e01d16f0552c389419e173609b2f257ffd2f8c9231f3ed21a6bb023"} Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.687228 4775 scope.go:117] "RemoveContainer" containerID="e4688d8f9959793b3c09c75ee759bf5f6942cfd383400a35a6a02f55e85b0d1d" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.687240 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.751903 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.772177 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.781125 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:54 crc kubenswrapper[4775]: E0123 14:31:54.791206 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.791249 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.791518 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.792225 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.792438 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.794792 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.869072 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfqx\" (UniqueName: \"kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.869221 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.970893 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trfqx\" (UniqueName: \"kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.970995 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.976668 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:54 crc kubenswrapper[4775]: I0123 14:31:54.991371 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trfqx\" (UniqueName: \"kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx\") pod \"nova-kuttl-scheduler-0\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:55 crc kubenswrapper[4775]: I0123 14:31:55.117178 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:31:55 crc kubenswrapper[4775]: W0123 14:31:55.730992 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76b301e2_214f_47ac_99b1_2cc76488c253.slice/crio-c44d31ef626505fe64787a5927f46c125139a4b911b2ce50c292d1e7a21655a0 WatchSource:0}: Error finding container c44d31ef626505fe64787a5927f46c125139a4b911b2ce50c292d1e7a21655a0: Status 404 returned error can't find the container with id c44d31ef626505fe64787a5927f46c125139a4b911b2ce50c292d1e7a21655a0 Jan 23 14:31:55 crc kubenswrapper[4775]: I0123 14:31:55.731674 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d46934b-df3e-4beb-b74c-0c4c0d568ec4" path="/var/lib/kubelet/pods/3d46934b-df3e-4beb-b74c-0c4c0d568ec4/volumes" Jan 23 14:31:55 crc kubenswrapper[4775]: I0123 14:31:55.732998 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:31:56 crc kubenswrapper[4775]: I0123 14:31:56.292359 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:56 crc kubenswrapper[4775]: I0123 14:31:56.294039 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:31:56 crc kubenswrapper[4775]: I0123 14:31:56.739007 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"76b301e2-214f-47ac-99b1-2cc76488c253","Type":"ContainerStarted","Data":"3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32"} Jan 23 14:31:56 crc kubenswrapper[4775]: I0123 14:31:56.739075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"76b301e2-214f-47ac-99b1-2cc76488c253","Type":"ContainerStarted","Data":"c44d31ef626505fe64787a5927f46c125139a4b911b2ce50c292d1e7a21655a0"} Jan 23 14:31:56 crc kubenswrapper[4775]: I0123 14:31:56.766792 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.76677014 podStartE2EDuration="2.76677014s" podCreationTimestamp="2026-01-23 14:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:31:56.764084418 +0000 UTC m=+1663.758913218" watchObservedRunningTime="2026-01-23 14:31:56.76677014 +0000 UTC m=+1663.761598890" Jan 23 14:31:58 crc kubenswrapper[4775]: I0123 14:31:58.479918 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 
14:31:59.102243 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"] Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.103462 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.106017 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.106721 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.123488 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"] Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.246945 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4vcv\" (UniqueName: \"kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.247038 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.247109 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.349615 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4vcv\" (UniqueName: \"kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.349717 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.349834 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.359915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.360124 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.382331 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4vcv\" (UniqueName: \"kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv\") pod \"nova-kuttl-cell1-cell-mapping-vtvrt\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.438334 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:31:59 crc kubenswrapper[4775]: I0123 14:31:59.925519 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"] Jan 23 14:31:59 crc kubenswrapper[4775]: W0123 14:31:59.926397 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc9f9b55_ea71_4396_82bf_2a49788ccc42.slice/crio-8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f WatchSource:0}: Error finding container 8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f: Status 404 returned error can't find the container with id 8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f Jan 23 14:32:00 crc kubenswrapper[4775]: I0123 14:32:00.119960 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:00 crc kubenswrapper[4775]: I0123 14:32:00.788465 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" event={"ID":"bc9f9b55-ea71-4396-82bf-2a49788ccc42","Type":"ContainerStarted","Data":"af2e3d2fa526f083ebc61856e091755e854affc68850f0ccf9dc55db4575410a"} Jan 23 14:32:00 crc kubenswrapper[4775]: I0123 14:32:00.788527 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" event={"ID":"bc9f9b55-ea71-4396-82bf-2a49788ccc42","Type":"ContainerStarted","Data":"8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f"} Jan 23 14:32:00 crc kubenswrapper[4775]: I0123 14:32:00.828306 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" podStartSLOduration=1.828277971 podStartE2EDuration="1.828277971s" podCreationTimestamp="2026-01-23 14:31:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:00.819143097 +0000 UTC m=+1667.813971877" watchObservedRunningTime="2026-01-23 14:32:00.828277971 +0000 UTC m=+1667.823106751" Jan 23 14:32:01 crc kubenswrapper[4775]: I0123 14:32:01.070681 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:01 crc kubenswrapper[4775]: I0123 14:32:01.072496 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:01 crc kubenswrapper[4775]: I0123 14:32:01.292692 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:01 crc kubenswrapper[4775]: I0123 14:32:01.292748 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:02 crc kubenswrapper[4775]: I0123 14:32:02.112075 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.159:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:02 crc kubenswrapper[4775]: I0123 14:32:02.153107 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.159:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:02 crc kubenswrapper[4775]: I0123 14:32:02.375070 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.160:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:02 crc kubenswrapper[4775]: I0123 14:32:02.375011 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.160:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:02 crc kubenswrapper[4775]: I0123 14:32:02.713944 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:32:02 crc kubenswrapper[4775]: E0123 14:32:02.714176 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:32:04 crc kubenswrapper[4775]: I0123 14:32:04.827551 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc9f9b55-ea71-4396-82bf-2a49788ccc42" containerID="af2e3d2fa526f083ebc61856e091755e854affc68850f0ccf9dc55db4575410a" exitCode=0 Jan 23 14:32:04 crc kubenswrapper[4775]: I0123 14:32:04.827620 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" event={"ID":"bc9f9b55-ea71-4396-82bf-2a49788ccc42","Type":"ContainerDied","Data":"af2e3d2fa526f083ebc61856e091755e854affc68850f0ccf9dc55db4575410a"} Jan 23 14:32:05 crc kubenswrapper[4775]: I0123 14:32:05.118279 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:05 crc kubenswrapper[4775]: I0123 14:32:05.154173 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:05 crc kubenswrapper[4775]: I0123 14:32:05.894296 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.315115 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.480505 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts\") pod \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.480856 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data\") pod \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.480976 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4vcv\" (UniqueName: \"kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv\") pod \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\" (UID: \"bc9f9b55-ea71-4396-82bf-2a49788ccc42\") " Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.490130 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts" (OuterVolumeSpecName: "scripts") pod "bc9f9b55-ea71-4396-82bf-2a49788ccc42" (UID: "bc9f9b55-ea71-4396-82bf-2a49788ccc42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.503365 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv" (OuterVolumeSpecName: "kube-api-access-g4vcv") pod "bc9f9b55-ea71-4396-82bf-2a49788ccc42" (UID: "bc9f9b55-ea71-4396-82bf-2a49788ccc42"). InnerVolumeSpecName "kube-api-access-g4vcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.531082 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data" (OuterVolumeSpecName: "config-data") pod "bc9f9b55-ea71-4396-82bf-2a49788ccc42" (UID: "bc9f9b55-ea71-4396-82bf-2a49788ccc42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.583290 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4vcv\" (UniqueName: \"kubernetes.io/projected/bc9f9b55-ea71-4396-82bf-2a49788ccc42-kube-api-access-g4vcv\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.583331 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.583344 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc9f9b55-ea71-4396-82bf-2a49788ccc42-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.853614 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.853597 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt" event={"ID":"bc9f9b55-ea71-4396-82bf-2a49788ccc42","Type":"ContainerDied","Data":"8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f"} Jan 23 14:32:06 crc kubenswrapper[4775]: I0123 14:32:06.853740 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8826b475b7f4985e9df7c5956361970ec9965a4aa8e0ce85f18a8f4f7a7db30f" Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.060105 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.060382 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-log" containerID="cri-o://ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2" gracePeriod=30 Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.060862 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-api" containerID="cri-o://103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470" gracePeriod=30 Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.080107 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.142250 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.142459 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-log" containerID="cri-o://42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36" gracePeriod=30 Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.142604 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54" 
gracePeriod=30 Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.870683 4775 generic.go:334] "Generic (PLEG): container finished" podID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerID="ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2" exitCode=143 Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.870836 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerDied","Data":"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2"} Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.874314 4775 generic.go:334] "Generic (PLEG): container finished" podID="09f88f54-b0df-4938-a185-e104d3da129f" containerID="42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36" exitCode=143 Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.874424 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerDied","Data":"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36"} Jan 23 14:32:07 crc kubenswrapper[4775]: I0123 14:32:07.874609 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" gracePeriod=30 Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.119842 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.122741 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.124609 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.124695 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.650369 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.667459 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data\") pod \"0d3bad13-3a3b-481d-bdf4-b489422eb398\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.667617 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbbc7\" (UniqueName: \"kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7\") pod \"0d3bad13-3a3b-481d-bdf4-b489422eb398\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.667674 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs\") pod \"0d3bad13-3a3b-481d-bdf4-b489422eb398\" (UID: \"0d3bad13-3a3b-481d-bdf4-b489422eb398\") " Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.668880 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs" (OuterVolumeSpecName: "logs") pod "0d3bad13-3a3b-481d-bdf4-b489422eb398" (UID: "0d3bad13-3a3b-481d-bdf4-b489422eb398"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.676633 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7" (OuterVolumeSpecName: "kube-api-access-tbbc7") pod "0d3bad13-3a3b-481d-bdf4-b489422eb398" (UID: "0d3bad13-3a3b-481d-bdf4-b489422eb398"). InnerVolumeSpecName "kube-api-access-tbbc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.696394 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data" (OuterVolumeSpecName: "config-data") pod "0d3bad13-3a3b-481d-bdf4-b489422eb398" (UID: "0d3bad13-3a3b-481d-bdf4-b489422eb398"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.746853 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.771088 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs\") pod \"09f88f54-b0df-4938-a185-e104d3da129f\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.771222 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data\") pod \"09f88f54-b0df-4938-a185-e104d3da129f\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.771280 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8k7g\" (UniqueName: \"kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g\") pod \"09f88f54-b0df-4938-a185-e104d3da129f\" (UID: \"09f88f54-b0df-4938-a185-e104d3da129f\") " Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.772252 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbbc7\" (UniqueName: \"kubernetes.io/projected/0d3bad13-3a3b-481d-bdf4-b489422eb398-kube-api-access-tbbc7\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.772286 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d3bad13-3a3b-481d-bdf4-b489422eb398-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.772300 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d3bad13-3a3b-481d-bdf4-b489422eb398-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.773785 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs" (OuterVolumeSpecName: "logs") pod "09f88f54-b0df-4938-a185-e104d3da129f" (UID: "09f88f54-b0df-4938-a185-e104d3da129f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.777125 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g" (OuterVolumeSpecName: "kube-api-access-g8k7g") pod "09f88f54-b0df-4938-a185-e104d3da129f" (UID: "09f88f54-b0df-4938-a185-e104d3da129f"). InnerVolumeSpecName "kube-api-access-g8k7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.807313 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data" (OuterVolumeSpecName: "config-data") pod "09f88f54-b0df-4938-a185-e104d3da129f" (UID: "09f88f54-b0df-4938-a185-e104d3da129f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.876308 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09f88f54-b0df-4938-a185-e104d3da129f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.876361 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8k7g\" (UniqueName: \"kubernetes.io/projected/09f88f54-b0df-4938-a185-e104d3da129f-kube-api-access-g8k7g\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.876385 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09f88f54-b0df-4938-a185-e104d3da129f-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.914183 4775 generic.go:334] "Generic (PLEG): container finished" podID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerID="103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470" exitCode=0 Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.914270 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerDied","Data":"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470"} Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.914301 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.914339 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"0d3bad13-3a3b-481d-bdf4-b489422eb398","Type":"ContainerDied","Data":"c0e734d13db605e2174d6c175ee9bc97984c9197667104eb9fbac9e883c62175"} Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.914370 4775 scope.go:117] "RemoveContainer" containerID="103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.918917 4775 generic.go:334] "Generic (PLEG): container finished" podID="09f88f54-b0df-4938-a185-e104d3da129f" containerID="911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54" exitCode=0 Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.918967 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.918961 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerDied","Data":"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54"} Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.919025 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"09f88f54-b0df-4938-a185-e104d3da129f","Type":"ContainerDied","Data":"bd79ebf6c58c98b770d795796bd533c1b69a9ee304177bb14c2b8c21e15ca799"} Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.955112 4775 scope.go:117] "RemoveContainer" containerID="ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.983878 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.996859 4775 scope.go:117] "RemoveContainer" containerID="103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470" Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.997640 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470\": container with ID starting with 103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470 not found: ID does not exist" containerID="103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.997707 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470"} err="failed to get container status \"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470\": rpc error: code = NotFound desc = could not find container \"103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470\": container with ID starting with 103670e5116f644eb28979803164ccf24c988eb5cc7579ff5f555659c22ef470 not found: ID does not exist" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.997763 4775 scope.go:117] "RemoveContainer" containerID="ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.997978 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:32:10 crc kubenswrapper[4775]: E0123 14:32:10.998544 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2\": container with ID starting with ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2 not found: ID does not exist" containerID="ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.998588 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2"} err="failed to get container status \"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2\": rpc error: code = NotFound desc = could not find container 
\"ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2\": container with ID starting with ba05a0d9c4b290261a774649af75722313f5b48ad074a065cb9b6c8bab2da2a2 not found: ID does not exist" Jan 23 14:32:10 crc kubenswrapper[4775]: I0123 14:32:10.998616 4775 scope.go:117] "RemoveContainer" containerID="911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.022953 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.038945 4775 scope.go:117] "RemoveContainer" containerID="42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.047171 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.054433 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.054950 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9f9b55-ea71-4396-82bf-2a49788ccc42" containerName="nova-manage" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.054980 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9f9b55-ea71-4396-82bf-2a49788ccc42" containerName="nova-manage" Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.055000 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-log" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055014 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-log" Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.055041 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-api" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055055 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-api" Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.055101 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055113 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.055136 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-log" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055148 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-log" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055450 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-log" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055475 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f88f54-b0df-4938-a185-e104d3da129f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 
14:32:11.055499 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-api" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055536 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9f9b55-ea71-4396-82bf-2a49788ccc42" containerName="nova-manage" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.055549 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" containerName="nova-kuttl-api-log" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.056903 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.059613 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.065343 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.068637 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.075103 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.078023 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.086061 4775 scope.go:117] "RemoveContainer" containerID="911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087383 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skdhl\" (UniqueName: \"kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087465 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087592 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087735 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.087796 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54\": container with ID starting with 911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54 not found: ID does not exist" 
containerID="911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087856 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54"} err="failed to get container status \"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54\": rpc error: code = NotFound desc = could not find container \"911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54\": container with ID starting with 911eac7827470d33639c04aeb00f69e90569747c8131bfa8a5ec515539b3db54 not found: ID does not exist" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.087887 4775 scope.go:117] "RemoveContainer" containerID="42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36" Jan 23 14:32:11 crc kubenswrapper[4775]: E0123 14:32:11.089499 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36\": container with ID starting with 42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36 not found: ID does not exist" containerID="42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.089927 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36"} err="failed to get container status \"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36\": rpc error: code = NotFound desc = could not find container \"42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36\": container with ID starting with 42e590fbdd808de903331a89bedf41b486f926b43ce507f437a540852724aa36 not found: ID does not exist" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.188571 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.188901 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skdhl\" (UniqueName: \"kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.189002 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.189044 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7gh9\" (UniqueName: \"kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.189104 4775 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.189263 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.189966 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.194578 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.217771 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skdhl\" (UniqueName: \"kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl\") pod \"nova-kuttl-metadata-0\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.291329 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.291510 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7gh9\" (UniqueName: \"kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.291618 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.292107 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.297570 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data\") pod \"nova-kuttl-api-0\" (UID: 
\"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.320755 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7gh9\" (UniqueName: \"kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9\") pod \"nova-kuttl-api-0\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.379670 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.396007 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.730385 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f88f54-b0df-4938-a185-e104d3da129f" path="/var/lib/kubelet/pods/09f88f54-b0df-4938-a185-e104d3da129f/volumes" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.731194 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d3bad13-3a3b-481d-bdf4-b489422eb398" path="/var/lib/kubelet/pods/0d3bad13-3a3b-481d-bdf4-b489422eb398/volumes" Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.855999 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:32:11 crc kubenswrapper[4775]: W0123 14:32:11.899734 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08cc29e8_1d83_4f1e_b343_a813a06c7f5a.slice/crio-c04673dffc47a353d8b2f30b1c7c3756c9fa915a864e9169df809bc23ac4884f WatchSource:0}: Error finding container c04673dffc47a353d8b2f30b1c7c3756c9fa915a864e9169df809bc23ac4884f: Status 404 returned error can't find the container with id c04673dffc47a353d8b2f30b1c7c3756c9fa915a864e9169df809bc23ac4884f Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.949065 4775 generic.go:334] "Generic (PLEG): container finished" podID="76b301e2-214f-47ac-99b1-2cc76488c253" containerID="3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" exitCode=0 Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.949120 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"76b301e2-214f-47ac-99b1-2cc76488c253","Type":"ContainerDied","Data":"3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32"} Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.963904 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerStarted","Data":"c04673dffc47a353d8b2f30b1c7c3756c9fa915a864e9169df809bc23ac4884f"} Jan 23 14:32:11 crc kubenswrapper[4775]: I0123 14:32:11.998714 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.251368 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.316189 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trfqx\" (UniqueName: \"kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx\") pod \"76b301e2-214f-47ac-99b1-2cc76488c253\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.316404 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data\") pod \"76b301e2-214f-47ac-99b1-2cc76488c253\" (UID: \"76b301e2-214f-47ac-99b1-2cc76488c253\") " Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.322704 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx" (OuterVolumeSpecName: "kube-api-access-trfqx") pod "76b301e2-214f-47ac-99b1-2cc76488c253" (UID: "76b301e2-214f-47ac-99b1-2cc76488c253"). InnerVolumeSpecName "kube-api-access-trfqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.346053 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data" (OuterVolumeSpecName: "config-data") pod "76b301e2-214f-47ac-99b1-2cc76488c253" (UID: "76b301e2-214f-47ac-99b1-2cc76488c253"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.418276 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trfqx\" (UniqueName: \"kubernetes.io/projected/76b301e2-214f-47ac-99b1-2cc76488c253-kube-api-access-trfqx\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.418334 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76b301e2-214f-47ac-99b1-2cc76488c253-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.982734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerStarted","Data":"2ec2d8ee517098a55339c83b7adf972f94f667aba8e7519f92926f2a080db62e"} Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.982850 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerStarted","Data":"64ad254d6ba4ee3740ce23f48d5a83bfdac9d38cd1e51e005d44e141074beaa9"} Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.986775 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"76b301e2-214f-47ac-99b1-2cc76488c253","Type":"ContainerDied","Data":"c44d31ef626505fe64787a5927f46c125139a4b911b2ce50c292d1e7a21655a0"} Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.986830 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.986873 4775 scope.go:117] "RemoveContainer" containerID="3b5691326bb0d178b840a8fa1eaea852a75d863f4e1bb47f05120caae1fc9a32" Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.994953 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerStarted","Data":"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762"} Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.995025 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerStarted","Data":"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05"} Jan 23 14:32:12 crc kubenswrapper[4775]: I0123 14:32:12.995049 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerStarted","Data":"5bbd58bc5eb6780b68e8d968266f41a0b7126273d93210d99f32930850e03151"} Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.019534 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=3.019495626 podStartE2EDuration="3.019495626s" podCreationTimestamp="2026-01-23 14:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:13.014499643 +0000 UTC m=+1680.009328373" watchObservedRunningTime="2026-01-23 14:32:13.019495626 +0000 UTC m=+1680.014324406" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.052778 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.052743055 podStartE2EDuration="3.052743055s" podCreationTimestamp="2026-01-23 14:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:13.050648799 +0000 UTC m=+1680.045477549" watchObservedRunningTime="2026-01-23 14:32:13.052743055 +0000 UTC m=+1680.047571835" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.083031 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.088393 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.124919 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:13 crc kubenswrapper[4775]: E0123 14:32:13.125438 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.125461 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.125685 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.126470 4775 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.130342 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.131555 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.230941 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.231005 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5zwr\" (UniqueName: \"kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.332979 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.333107 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5zwr\" (UniqueName: \"kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.339021 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.360684 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5zwr\" (UniqueName: \"kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr\") pod \"nova-kuttl-scheduler-0\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.452382 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.726870 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76b301e2-214f-47ac-99b1-2cc76488c253" path="/var/lib/kubelet/pods/76b301e2-214f-47ac-99b1-2cc76488c253/volumes" Jan 23 14:32:13 crc kubenswrapper[4775]: I0123 14:32:13.953880 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:32:13 crc kubenswrapper[4775]: W0123 14:32:13.959624 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddaaf7413_398a_4a39_a375_c130187f9726.slice/crio-97fad5da4691bcf418d5d7014464949a4751476840d2d4bd08f07e42875a279d WatchSource:0}: Error finding container 97fad5da4691bcf418d5d7014464949a4751476840d2d4bd08f07e42875a279d: Status 404 returned error can't find the container with id 97fad5da4691bcf418d5d7014464949a4751476840d2d4bd08f07e42875a279d Jan 23 14:32:14 crc kubenswrapper[4775]: I0123 14:32:14.009070 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"daaf7413-398a-4a39-a375-c130187f9726","Type":"ContainerStarted","Data":"97fad5da4691bcf418d5d7014464949a4751476840d2d4bd08f07e42875a279d"} Jan 23 14:32:14 crc kubenswrapper[4775]: I0123 14:32:14.713894 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:32:14 crc kubenswrapper[4775]: E0123 14:32:14.714120 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:32:15 crc kubenswrapper[4775]: I0123 14:32:15.020633 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"daaf7413-398a-4a39-a375-c130187f9726","Type":"ContainerStarted","Data":"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a"} Jan 23 14:32:15 crc kubenswrapper[4775]: I0123 14:32:15.048496 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.048472975 podStartE2EDuration="2.048472975s" podCreationTimestamp="2026-01-23 14:32:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:15.04269736 +0000 UTC m=+1682.037526130" watchObservedRunningTime="2026-01-23 14:32:15.048472975 +0000 UTC m=+1682.043301755" Jan 23 14:32:16 crc kubenswrapper[4775]: I0123 14:32:16.380611 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:16 crc kubenswrapper[4775]: I0123 14:32:16.380692 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:18 crc kubenswrapper[4775]: I0123 14:32:18.453361 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:21 crc kubenswrapper[4775]: I0123 14:32:21.380703 4775 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:21 crc kubenswrapper[4775]: I0123 14:32:21.381146 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:21 crc kubenswrapper[4775]: I0123 14:32:21.396896 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:21 crc kubenswrapper[4775]: I0123 14:32:21.396963 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:22 crc kubenswrapper[4775]: I0123 14:32:22.504028 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.164:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:22 crc kubenswrapper[4775]: I0123 14:32:22.545038 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.163:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:22 crc kubenswrapper[4775]: I0123 14:32:22.545038 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.163:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:22 crc kubenswrapper[4775]: I0123 14:32:22.545095 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.164:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:32:23 crc kubenswrapper[4775]: I0123 14:32:23.453185 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:23 crc kubenswrapper[4775]: I0123 14:32:23.497907 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:24 crc kubenswrapper[4775]: I0123 14:32:24.177832 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:32:25 crc kubenswrapper[4775]: I0123 14:32:25.714761 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:32:25 crc kubenswrapper[4775]: E0123 14:32:25.715324 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.389061 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.403664 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.406881 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.409036 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.409112 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.409424 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.409484 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.411373 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:31 crc kubenswrapper[4775]: I0123 14:32:31.411609 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:32:32 crc kubenswrapper[4775]: I0123 14:32:32.217975 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.624654 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.626241 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.639920 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.641532 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.644393 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.644448 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.644537 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckrlr\" (UniqueName: \"kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.675926 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.703350 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.745752 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.746140 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.746453 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckrlr\" (UniqueName: \"kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.746669 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45pc5\" (UniqueName: \"kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.746982 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.747032 4775 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.747067 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.756476 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.775940 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckrlr\" (UniqueName: \"kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr\") pod \"nova-kuttl-api-2\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.848541 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45pc5\" (UniqueName: \"kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.848607 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.848629 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.849468 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.855333 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.877492 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45pc5\" (UniqueName: \"kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5\") pod \"nova-kuttl-api-1\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") " 
pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.917192 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.919006 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.920899 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.921887 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.934595 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.949614 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grnlj\" (UniqueName: \"kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.949668 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.949736 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.949759 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-985tt\" (UniqueName: \"kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.965774 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:32:33 crc kubenswrapper[4775]: I0123 14:32:33.969625 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.008549 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.050719 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grnlj\" (UniqueName: \"kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.050784 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.050919 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.050944 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-985tt\" (UniqueName: \"kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.058429 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.058493 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.074281 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-985tt\" (UniqueName: \"kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt\") pod \"nova-kuttl-cell0-conductor-1\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.075576 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grnlj\" (UniqueName: \"kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj\") pod \"nova-kuttl-cell0-conductor-2\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.262359 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.281545 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1"
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.424556 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"]
Jan 23 14:32:34 crc kubenswrapper[4775]: W0123 14:32:34.428920 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda771c767_804b_4c42_bfc9_e6982acea366.slice/crio-53f1805ce8ed107c85194e2afb2fe0fc7531107d1bd37dd54eace53ff7e081e3 WatchSource:0}: Error finding container 53f1805ce8ed107c85194e2afb2fe0fc7531107d1bd37dd54eace53ff7e081e3: Status 404 returned error can't find the container with id 53f1805ce8ed107c85194e2afb2fe0fc7531107d1bd37dd54eace53ff7e081e3
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.465874 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"]
Jan 23 14:32:34 crc kubenswrapper[4775]: W0123 14:32:34.491000 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3a307d6_651f_4f43_83ec_6d1e1118f7ad.slice/crio-2dcf48bbe2320b010b20887999fecd4308d678fb9880db6769d32c78a7d14c47 WatchSource:0}: Error finding container 2dcf48bbe2320b010b20887999fecd4308d678fb9880db6769d32c78a7d14c47: Status 404 returned error can't find the container with id 2dcf48bbe2320b010b20887999fecd4308d678fb9880db6769d32c78a7d14c47
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.597280 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"]
Jan 23 14:32:34 crc kubenswrapper[4775]: I0123 14:32:34.741681 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"]
Jan 23 14:32:34 crc kubenswrapper[4775]: W0123 14:32:34.744252 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod422f57ad_3c24_4af9_aa50_c17639a07403.slice/crio-13f3d5061361bcece8ecd154ec4ce1dd8f57aa77665423267627e59266ce27ed WatchSource:0}: Error finding container 13f3d5061361bcece8ecd154ec4ce1dd8f57aa77665423267627e59266ce27ed: Status 404 returned error can't find the container with id 13f3d5061361bcece8ecd154ec4ce1dd8f57aa77665423267627e59266ce27ed
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.267704 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerStarted","Data":"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb"}
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.268183 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerStarted","Data":"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797"}
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.268201 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerStarted","Data":"2dcf48bbe2320b010b20887999fecd4308d678fb9880db6769d32c78a7d14c47"}
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.270096 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"422f57ad-3c24-4af9-aa50-c17639a07403","Type":"ContainerStarted","Data":"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1"}
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.270157 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"422f57ad-3c24-4af9-aa50-c17639a07403","Type":"ContainerStarted","Data":"13f3d5061361bcece8ecd154ec4ce1dd8f57aa77665423267627e59266ce27ed"}
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.270495 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2"
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.274877 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerStarted","Data":"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994"}
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.274948 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerStarted","Data":"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399"}
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.274978 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerStarted","Data":"53f1805ce8ed107c85194e2afb2fe0fc7531107d1bd37dd54eace53ff7e081e3"}
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.278497 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"93184515-7dbf-4aeb-823f-0146b2a66d39","Type":"ContainerStarted","Data":"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec"}
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.278534 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"93184515-7dbf-4aeb-823f-0146b2a66d39","Type":"ContainerStarted","Data":"7f633d05d3eeb44bacac1fe7b01d7340207dd706030a31990ae0908b4cb1ede1"}
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.278640 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1"
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.298647 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-1" podStartSLOduration=2.298620623 podStartE2EDuration="2.298620623s" podCreationTimestamp="2026-01-23 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:35.29317683 +0000 UTC m=+1702.288005590" watchObservedRunningTime="2026-01-23 14:32:35.298620623 +0000 UTC m=+1702.293449363"
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.319493 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podStartSLOduration=2.319473872 podStartE2EDuration="2.319473872s" podCreationTimestamp="2026-01-23 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:35.309142931 +0000 UTC m=+1702.303971671" watchObservedRunningTime="2026-01-23 14:32:35.319473872 +0000 UTC m=+1702.314302622"
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.327639 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podStartSLOduration=2.327619792 podStartE2EDuration="2.327619792s" podCreationTimestamp="2026-01-23 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:35.326529211 +0000 UTC m=+1702.321357971" watchObservedRunningTime="2026-01-23 14:32:35.327619792 +0000 UTC m=+1702.322448532"
Jan 23 14:32:35 crc kubenswrapper[4775]: I0123 14:32:35.355096 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-2" podStartSLOduration=2.355076387 podStartE2EDuration="2.355076387s" podCreationTimestamp="2026-01-23 14:32:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:35.349305724 +0000 UTC m=+1702.344134464" watchObservedRunningTime="2026-01-23 14:32:35.355076387 +0000 UTC m=+1702.349905137"
Jan 23 14:32:37 crc kubenswrapper[4775]: I0123 14:32:37.714963 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:32:37 crc kubenswrapper[4775]: E0123 14:32:37.715702 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:32:39 crc kubenswrapper[4775]: I0123 14:32:39.312795 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2"
Jan 23 14:32:39 crc kubenswrapper[4775]: I0123 14:32:39.334499 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.641278 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"]
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.643720 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.649026 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"]
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.650090 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.668512 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"]
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.678041 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"]
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.771535 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"]
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.773343 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.785886 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"]
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.791990 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.794196 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqdpl\" (UniqueName: \"kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.794696 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.794757 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.794862 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4ghk\" (UniqueName: \"kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.814635 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"]
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.828883 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"]
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896290 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896373 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896452 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbjn6\" (UniqueName: \"kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896482 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4ghk\" (UniqueName: \"kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896527 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896585 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqdpl\" (UniqueName: \"kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896665 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896706 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896752 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.896786 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthpv\" (UniqueName: \"kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.904864 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.908376 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.917386 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqdpl\" (UniqueName: \"kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl\") pod \"nova-kuttl-scheduler-1\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.918325 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4ghk\" (UniqueName: \"kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk\") pod \"nova-kuttl-scheduler-2\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998380 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbjn6\" (UniqueName: \"kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998428 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998488 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998523 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998541 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vthpv\" (UniqueName: \"kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998567 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.998957 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:40 crc kubenswrapper[4775]: I0123 14:32:40.999485 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.003932 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.004749 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.004936 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.011620 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.026618 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vthpv\" (UniqueName: \"kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv\") pod \"nova-kuttl-metadata-2\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.030717 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbjn6\" (UniqueName: \"kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6\") pod \"nova-kuttl-metadata-1\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.119084 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.119519 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.495945 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"]
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.557557 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"]
Jan 23 14:32:41 crc kubenswrapper[4775]: W0123 14:32:41.560260 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3429d990_e795_4241_bb25_8871be747a75.slice/crio-69f48396ed04cbb0dab3b9624bbf068a6a9d790e274fbfcc9a62a2e58dc96c61 WatchSource:0}: Error finding container 69f48396ed04cbb0dab3b9624bbf068a6a9d790e274fbfcc9a62a2e58dc96c61: Status 404 returned error can't find the container with id 69f48396ed04cbb0dab3b9624bbf068a6a9d790e274fbfcc9a62a2e58dc96c61
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.630588 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"]
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.632313 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.651671 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"]
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.653170 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.665092 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"]
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.683680 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"]
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.702586 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"]
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.725164 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"]
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.816124 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.816340 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.816412 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csm4d\" (UniqueName: \"kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.816528 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm4rh\" (UniqueName: \"kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.917787 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.917959 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.918012 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csm4d\" (UniqueName: \"kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.918044 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm4rh\" (UniqueName: \"kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.923223 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.923829 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.933979 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csm4d\" (UniqueName: \"kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d\") pod \"nova-kuttl-cell1-conductor-1\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.943158 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm4rh\" (UniqueName: \"kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh\") pod \"nova-kuttl-cell1-conductor-2\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.965449 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 23 14:32:41 crc kubenswrapper[4775]: I0123 14:32:41.987391 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.344525 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerStarted","Data":"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65"}
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.345050 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerStarted","Data":"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc"}
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.345061 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerStarted","Data":"60356a4b31a8069b59d95c548c20dccce95f5173efd6d074a66247e83d02c3f3"}
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.368520 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"3429d990-e795-4241-bb25-8871be747a75","Type":"ContainerStarted","Data":"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da"}
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.368564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"3429d990-e795-4241-bb25-8871be747a75","Type":"ContainerStarted","Data":"69f48396ed04cbb0dab3b9624bbf068a6a9d790e274fbfcc9a62a2e58dc96c61"}
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.372636 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-2" podStartSLOduration=2.372622554 podStartE2EDuration="2.372622554s" podCreationTimestamp="2026-01-23 14:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:42.360384729 +0000 UTC m=+1709.355213479" watchObservedRunningTime="2026-01-23 14:32:42.372622554 +0000 UTC m=+1709.367451294"
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.376355 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"ec05960b-b36c-408b-af7e-3b5b312882fc","Type":"ContainerStarted","Data":"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2"}
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.376456 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"ec05960b-b36c-408b-af7e-3b5b312882fc","Type":"ContainerStarted","Data":"b8e794aac4ed73289d855382e742fffe4df1b0a69c82118afa283730ec3b3a07"}
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.381927 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerStarted","Data":"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3"}
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.381973 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerStarted","Data":"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5"}
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.381984 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerStarted","Data":"915f8179b0a7a6e696f78019ea25b2951e8b526105585716102ad10d5d921fdc"}
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.385901 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podStartSLOduration=2.385846787 podStartE2EDuration="2.385846787s" podCreationTimestamp="2026-01-23 14:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:42.383142911 +0000 UTC m=+1709.377971651" watchObservedRunningTime="2026-01-23 14:32:42.385846787 +0000 UTC m=+1709.380675527"
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.398213 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podStartSLOduration=2.398197486 podStartE2EDuration="2.398197486s" podCreationTimestamp="2026-01-23 14:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:42.395440238 +0000 UTC m=+1709.390268978" watchObservedRunningTime="2026-01-23 14:32:42.398197486 +0000 UTC m=+1709.393026226"
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.429995 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"]
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.494955 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-1" podStartSLOduration=2.494921256 podStartE2EDuration="2.494921256s" podCreationTimestamp="2026-01-23 14:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:42.43482149 +0000 UTC m=+1709.429650230" watchObservedRunningTime="2026-01-23 14:32:42.494921256 +0000 UTC m=+1709.489750016"
Jan 23 14:32:42 crc kubenswrapper[4775]: I0123 14:32:42.497442 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"]
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.400466 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"cd9699c7-620b-45ed-9acf-d8d68558592a","Type":"ContainerStarted","Data":"a7f5a876f1b2c9412ba3369766da30eec726860b52b560828567dc91661b80f6"}
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.400961 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"cd9699c7-620b-45ed-9acf-d8d68558592a","Type":"ContainerStarted","Data":"8b77f3af6b0f185a5161fdaa5749b6a9f045ed71bd64a9e9cc6ddd8f8cc700d4"}
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.403996 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.419263 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b","Type":"ContainerStarted","Data":"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc"}
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.419355 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b","Type":"ContainerStarted","Data":"ff7647e6e88e0b90a60d3df6c11bd8f2d1a96b5234e9ed48e01eca64c74d9d98"}
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.420478 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.431251 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podStartSLOduration=2.431227715 podStartE2EDuration="2.431227715s" podCreationTimestamp="2026-01-23 14:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:43.424240367 +0000 UTC m=+1710.419069147" watchObservedRunningTime="2026-01-23 14:32:43.431227715 +0000 UTC m=+1710.426056485"
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.469586 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podStartSLOduration=2.469566277 podStartE2EDuration="2.469566277s" podCreationTimestamp="2026-01-23 14:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:32:43.451311322 +0000 UTC m=+1710.446140092" watchObservedRunningTime="2026-01-23 14:32:43.469566277 +0000 UTC m=+1710.464395027"
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.967754 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.968484 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.992206 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1"
Jan 23 14:32:43 crc kubenswrapper[4775]: I0123 14:32:43.993155 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-1"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.133070 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.133102 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.133152 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.133161 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.386497 4775 scope.go:117] "RemoveContainer" containerID="16a5d90dc00db76cb146a3ab929aa58cbca67687a4216b85575b35f06530fd3a"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.476250 4775 scope.go:117] "RemoveContainer" containerID="50f2c96b0b5892a7771fccd5951249dad10d9735e71ae46903621151778752dd"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.520455 4775 scope.go:117] "RemoveContainer" containerID="3089717e59d9d63482e14d904b82257965098590f1b4c79bdacedb05c6060f6e"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.570289 4775 scope.go:117] "RemoveContainer" containerID="dfd2790cbd2b3023e0c67bf180e375a19d1caefe130ba7bcb469b97ad55122e0"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.607016 4775 scope.go:117] "RemoveContainer" containerID="cf5d6f96b976fd01d4f59841045416396d0e05c1aeb5c738f3b2003a516bd24d"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.643195 4775 scope.go:117] "RemoveContainer" containerID="ad4721fdee0a09d6f1ae7bbee38e4c36536b30b8fa6aaeaab9d4a101c5700669"
Jan 23 14:32:45 crc kubenswrapper[4775]: I0123 14:32:45.668732 4775 scope.go:117] "RemoveContainer" containerID="8fbaa9880c81768fdeafd7a8d660d5afda75513a9354f9b29aea974cf6c99474"
Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.005025 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.012340 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.119645 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.119703 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.119723 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:46 crc kubenswrapper[4775]: I0123 14:32:46.119740 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.005254 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.012162 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.050852 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.053407 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.119779 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.120022 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.120093 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.120163 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.551760 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-2"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.556413 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-1"
Jan 23 14:32:51 crc kubenswrapper[4775]: I0123 14:32:51.715574 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:32:51 crc kubenswrapper[4775]: E0123 14:32:51.716006 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.018032 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2"
Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.036055 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1"
Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.284961 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.172:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.285076 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.285111 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.172:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:52 crc kubenswrapper[4775]: I0123 14:32:52.285180 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.173:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:32:53 crc kubenswrapper[4775]: I0123 14:32:53.973432 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 23 14:32:53 crc kubenswrapper[4775]: I0123 14:32:53.974599 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 23 14:32:53 crc kubenswrapper[4775]: I0123 14:32:53.977230 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 23 14:32:53 crc kubenswrapper[4775]: I0123 14:32:53.980688 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.001996 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1"
Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.003472 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1"
Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.006091 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-1"
Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.008552 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1"
Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.534669 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.534737 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-1"
Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.540377 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-1"
Jan 23 14:32:54 crc kubenswrapper[4775]: I0123 14:32:54.545745 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.123211 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.124120 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.124222 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.125497 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.129346 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.129393 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.130236 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-2"
Jan 23 14:33:01 crc kubenswrapper[4775]: I0123 14:33:01.630253 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-1"
Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.807388 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"]
Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.807694 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" containerID="cri-o://e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399" gracePeriod=30
Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.808232 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" containerID="cri-o://c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994" gracePeriod=30
Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.829436 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"]
Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.829970 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" containerID="cri-o://9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797" gracePeriod=30
Jan 23 14:33:02 crc kubenswrapper[4775]: I0123 14:33:02.829988 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" containerID="cri-o://44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb" gracePeriod=30
Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.216375 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"]
Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.216842 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="422f57ad-3c24-4af9-aa50-c17639a07403" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" gracePeriod=30
Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.245333 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"]
Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.245654 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" gracePeriod=30
Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.649594 4775 generic.go:334] "Generic (PLEG): container finished" podID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerID="9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797" exitCode=143
Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.649703 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerDied","Data":"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797"}
Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.652134 4775 generic.go:334] "Generic (PLEG): container finished" podID="a771c767-804b-4c42-bfc9-e6982acea366" containerID="e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399" exitCode=143
Jan 23 14:33:03 crc kubenswrapper[4775]: I0123 14:33:03.652214 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerDied","Data":"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399"}
Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.265226 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.267460 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.269658 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.269711 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" podUID="422f57ad-3c24-4af9-aa50-c17639a07403" containerName="nova-kuttl-cell0-conductor-conductor"
Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.284733 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.287010 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.288957 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Jan 23 14:33:04 crc kubenswrapper[4775]: E0123 14:33:04.289018 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerName="nova-kuttl-cell0-conductor-conductor"
Jan 23 14:33:05 crc kubenswrapper[4775]: I0123 14:33:05.714618 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:33:05 crc kubenswrapper[4775]: E0123 14:33:05.715861 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:33:05 crc kubenswrapper[4775]: I0123 14:33:05.988303 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": read tcp 10.217.0.2:54538->10.217.0.167:8774: read: connection reset by peer"
Jan 23 14:33:05 crc kubenswrapper[4775]: I0123 14:33:05.988906 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-1" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.167:8774/\": read tcp 10.217.0.2:54542->10.217.0.167:8774: read: connection reset by peer"
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.031925 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": read tcp 10.217.0.2:45484->10.217.0.166:8774: read: connection reset by peer"
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.031968 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-2" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.166:8774/\": read tcp 10.217.0.2:45474->10.217.0.166:8774: read: connection reset by peer"
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.208961 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1"
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.213989 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2"
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.325565 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-985tt\" (UniqueName: \"kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt\") pod \"93184515-7dbf-4aeb-823f-0146b2a66d39\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") "
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.325615 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data\") pod \"422f57ad-3c24-4af9-aa50-c17639a07403\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") "
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.325703 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data\") pod \"93184515-7dbf-4aeb-823f-0146b2a66d39\" (UID: \"93184515-7dbf-4aeb-823f-0146b2a66d39\") "
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.325729 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grnlj\" (UniqueName: \"kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj\") pod \"422f57ad-3c24-4af9-aa50-c17639a07403\" (UID: \"422f57ad-3c24-4af9-aa50-c17639a07403\") "
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.333118 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj" (OuterVolumeSpecName: "kube-api-access-grnlj") pod "422f57ad-3c24-4af9-aa50-c17639a07403" (UID: "422f57ad-3c24-4af9-aa50-c17639a07403"). InnerVolumeSpecName "kube-api-access-grnlj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.337924 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt" (OuterVolumeSpecName: "kube-api-access-985tt") pod "93184515-7dbf-4aeb-823f-0146b2a66d39" (UID: "93184515-7dbf-4aeb-823f-0146b2a66d39"). InnerVolumeSpecName "kube-api-access-985tt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.356045 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data" (OuterVolumeSpecName: "config-data") pod "422f57ad-3c24-4af9-aa50-c17639a07403" (UID: "422f57ad-3c24-4af9-aa50-c17639a07403"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.357649 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data" (OuterVolumeSpecName: "config-data") pod "93184515-7dbf-4aeb-823f-0146b2a66d39" (UID: "93184515-7dbf-4aeb-823f-0146b2a66d39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.374999 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1"
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.428477 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-985tt\" (UniqueName: \"kubernetes.io/projected/93184515-7dbf-4aeb-823f-0146b2a66d39-kube-api-access-985tt\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.428542 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/422f57ad-3c24-4af9-aa50-c17639a07403-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.428554 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93184515-7dbf-4aeb-823f-0146b2a66d39-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.428579 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grnlj\" (UniqueName: \"kubernetes.io/projected/422f57ad-3c24-4af9-aa50-c17639a07403-kube-api-access-grnlj\") on node \"crc\" DevicePath \"\""
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.454319 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2"
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.529579 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs\") pod \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") "
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.531302 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data\") pod \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") "
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.531464 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45pc5\" (UniqueName: \"kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5\") pod \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\" (UID: \"f3a307d6-651f-4f43-83ec-6d1e1118f7ad\") "
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.532371 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs" (OuterVolumeSpecName: "logs") pod "f3a307d6-651f-4f43-83ec-6d1e1118f7ad" (UID: "f3a307d6-651f-4f43-83ec-6d1e1118f7ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.535228 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5" (OuterVolumeSpecName: "kube-api-access-45pc5") pod "f3a307d6-651f-4f43-83ec-6d1e1118f7ad" (UID: "f3a307d6-651f-4f43-83ec-6d1e1118f7ad"). InnerVolumeSpecName "kube-api-access-45pc5".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.550521 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data" (OuterVolumeSpecName: "config-data") pod "f3a307d6-651f-4f43-83ec-6d1e1118f7ad" (UID: "f3a307d6-651f-4f43-83ec-6d1e1118f7ad"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.632662 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckrlr\" (UniqueName: \"kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr\") pod \"a771c767-804b-4c42-bfc9-e6982acea366\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.633531 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data\") pod \"a771c767-804b-4c42-bfc9-e6982acea366\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.633731 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs\") pod \"a771c767-804b-4c42-bfc9-e6982acea366\" (UID: \"a771c767-804b-4c42-bfc9-e6982acea366\") " Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.634257 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.634365 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.634447 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45pc5\" (UniqueName: \"kubernetes.io/projected/f3a307d6-651f-4f43-83ec-6d1e1118f7ad-kube-api-access-45pc5\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.634762 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs" (OuterVolumeSpecName: "logs") pod "a771c767-804b-4c42-bfc9-e6982acea366" (UID: "a771c767-804b-4c42-bfc9-e6982acea366"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.635261 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr" (OuterVolumeSpecName: "kube-api-access-ckrlr") pod "a771c767-804b-4c42-bfc9-e6982acea366" (UID: "a771c767-804b-4c42-bfc9-e6982acea366"). InnerVolumeSpecName "kube-api-access-ckrlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.670094 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data" (OuterVolumeSpecName: "config-data") pod "a771c767-804b-4c42-bfc9-e6982acea366" (UID: "a771c767-804b-4c42-bfc9-e6982acea366"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.685267 4775 generic.go:334] "Generic (PLEG): container finished" podID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerID="44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb" exitCode=0 Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.685364 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerDied","Data":"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.685712 4775 scope.go:117] "RemoveContainer" containerID="44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.686048 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-1" event={"ID":"f3a307d6-651f-4f43-83ec-6d1e1118f7ad","Type":"ContainerDied","Data":"2dcf48bbe2320b010b20887999fecd4308d678fb9880db6769d32c78a7d14c47"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.686365 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.687982 4775 generic.go:334] "Generic (PLEG): container finished" podID="422f57ad-3c24-4af9-aa50-c17639a07403" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" exitCode=0 Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.688123 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.688164 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"422f57ad-3c24-4af9-aa50-c17639a07403","Type":"ContainerDied","Data":"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.688607 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-2" event={"ID":"422f57ad-3c24-4af9-aa50-c17639a07403","Type":"ContainerDied","Data":"13f3d5061361bcece8ecd154ec4ce1dd8f57aa77665423267627e59266ce27ed"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.697833 4775 generic.go:334] "Generic (PLEG): container finished" podID="a771c767-804b-4c42-bfc9-e6982acea366" containerID="c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994" exitCode=0 Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.697890 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerDied","Data":"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.697913 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-2" event={"ID":"a771c767-804b-4c42-bfc9-e6982acea366","Type":"ContainerDied","Data":"53f1805ce8ed107c85194e2afb2fe0fc7531107d1bd37dd54eace53ff7e081e3"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.697970 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-2" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.715354 4775 scope.go:117] "RemoveContainer" containerID="9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.720953 4775 generic.go:334] "Generic (PLEG): container finished" podID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" exitCode=0 Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.721001 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"93184515-7dbf-4aeb-823f-0146b2a66d39","Type":"ContainerDied","Data":"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.721305 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" event={"ID":"93184515-7dbf-4aeb-823f-0146b2a66d39","Type":"ContainerDied","Data":"7f633d05d3eeb44bacac1fe7b01d7340207dd706030a31990ae0908b4cb1ede1"} Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.721030 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.735726 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.741817 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a771c767-804b-4c42-bfc9-e6982acea366-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.741849 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckrlr\" (UniqueName: \"kubernetes.io/projected/a771c767-804b-4c42-bfc9-e6982acea366-kube-api-access-ckrlr\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.741880 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a771c767-804b-4c42-bfc9-e6982acea366-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.742414 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-2"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.743332 4775 scope.go:117] "RemoveContainer" containerID="44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb" Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.745492 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb\": container with ID starting with 44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb not found: ID does not exist" containerID="44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.745525 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb"} err="failed to get container status \"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb\": rpc error: code = NotFound desc = could not find container 
\"44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb\": container with ID starting with 44b9d1efe3a792aaf862c7a3c79d1c143a7ce73765ff2de4b10ccbe7c4d3edbb not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.745574 4775 scope.go:117] "RemoveContainer" containerID="9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797" Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.749432 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797\": container with ID starting with 9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797 not found: ID does not exist" containerID="9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.749457 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797"} err="failed to get container status \"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797\": rpc error: code = NotFound desc = could not find container \"9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797\": container with ID starting with 9f217b6d27d3178707e2d3c8f04dc73a49c41c68988a537f0e9b988da1e4a797 not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.749479 4775 scope.go:117] "RemoveContainer" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.758226 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.767070 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-1"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.781113 4775 scope.go:117] "RemoveContainer" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.784447 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.784476 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1\": container with ID starting with 0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1 not found: ID does not exist" containerID="0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.784512 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1"} err="failed to get container status \"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1\": rpc error: code = NotFound desc = could not find container \"0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1\": container with ID starting with 0059bf4c06697e64e01608439a11541844aa72b36b84e23abc3ad0bcb9f4abe1 not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.784537 4775 scope.go:117] "RemoveContainer" containerID="c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994" Jan 23 14:33:06 crc kubenswrapper[4775]: 
I0123 14:33:06.794113 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-2"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.801998 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.811326 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-1"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.816584 4775 scope.go:117] "RemoveContainer" containerID="e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.876849 4775 scope.go:117] "RemoveContainer" containerID="c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994" Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.877299 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994\": container with ID starting with c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994 not found: ID does not exist" containerID="c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.877330 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994"} err="failed to get container status \"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994\": rpc error: code = NotFound desc = could not find container \"c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994\": container with ID starting with c52e98be36b5967e54703c69ce278883c27920df3afb501eb24d5bdc613b7994 not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.877351 4775 scope.go:117] "RemoveContainer" containerID="e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399" Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.877572 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399\": container with ID starting with e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399 not found: ID does not exist" containerID="e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.877597 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399"} err="failed to get container status \"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399\": rpc error: code = NotFound desc = could not find container \"e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399\": container with ID starting with e07000a69e2d7dc25840cd7ce274cd0030fac44b2c6a54f98b1b488652900399 not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.877611 4775 scope.go:117] "RemoveContainer" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.897008 4775 scope.go:117] "RemoveContainer" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" Jan 23 14:33:06 crc kubenswrapper[4775]: E0123 14:33:06.897336 4775 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec\": container with ID starting with 55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec not found: ID does not exist" containerID="55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.897384 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec"} err="failed to get container status \"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec\": rpc error: code = NotFound desc = could not find container \"55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec\": container with ID starting with 55b86529841f749494f871e3c1c9f9261bb198c398af7c06b847289681d88eec not found: ID does not exist" Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.959054 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.959254 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-2" podUID="3429d990-e795-4241-bb25-8871be747a75" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da" gracePeriod=30 Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.969618 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 23 14:33:06 crc kubenswrapper[4775]: I0123 14:33:06.969839 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-1" podUID="ec05960b-b36c-408b-af7e-3b5b312882fc" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.014704 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.017294 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-log" containerID="cri-o://a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.017697 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-2" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.040407 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.040610 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-log" containerID="cri-o://8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5" gracePeriod=30 Jan 23 
14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.040742 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-1" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.278432 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.278672 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" podUID="cd9699c7-620b-45ed-9acf-d8d68558592a" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://a7f5a876f1b2c9412ba3369766da30eec726860b52b560828567dc91661b80f6" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.283774 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.284007 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" podUID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc" gracePeriod=30 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.732983 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="422f57ad-3c24-4af9-aa50-c17639a07403" path="/var/lib/kubelet/pods/422f57ad-3c24-4af9-aa50-c17639a07403/volumes" Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.733553 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" path="/var/lib/kubelet/pods/93184515-7dbf-4aeb-823f-0146b2a66d39/volumes" Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.734199 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a771c767-804b-4c42-bfc9-e6982acea366" path="/var/lib/kubelet/pods/a771c767-804b-4c42-bfc9-e6982acea366/volumes" Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.735239 4775 generic.go:334] "Generic (PLEG): container finished" podID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerID="8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5" exitCode=143 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.735404 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" path="/var/lib/kubelet/pods/f3a307d6-651f-4f43-83ec-6d1e1118f7ad/volumes" Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.735980 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerDied","Data":"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5"} Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.739409 4775 generic.go:334] "Generic (PLEG): container finished" podID="93e53da4-e769-460a-b299-07131d928b83" containerID="a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc" exitCode=143 Jan 23 14:33:07 crc kubenswrapper[4775]: I0123 14:33:07.739435 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" 
event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerDied","Data":"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.130956 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.211432 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.269159 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data\") pod \"3429d990-e795-4241-bb25-8871be747a75\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.269310 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4ghk\" (UniqueName: \"kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk\") pod \"3429d990-e795-4241-bb25-8871be747a75\" (UID: \"3429d990-e795-4241-bb25-8871be747a75\") " Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.276932 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk" (OuterVolumeSpecName: "kube-api-access-v4ghk") pod "3429d990-e795-4241-bb25-8871be747a75" (UID: "3429d990-e795-4241-bb25-8871be747a75"). InnerVolumeSpecName "kube-api-access-v4ghk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.289757 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data" (OuterVolumeSpecName: "config-data") pod "3429d990-e795-4241-bb25-8871be747a75" (UID: "3429d990-e795-4241-bb25-8871be747a75"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.371051 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqdpl\" (UniqueName: \"kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl\") pod \"ec05960b-b36c-408b-af7e-3b5b312882fc\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.371425 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data\") pod \"ec05960b-b36c-408b-af7e-3b5b312882fc\" (UID: \"ec05960b-b36c-408b-af7e-3b5b312882fc\") " Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.372131 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3429d990-e795-4241-bb25-8871be747a75-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.372266 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4ghk\" (UniqueName: \"kubernetes.io/projected/3429d990-e795-4241-bb25-8871be747a75-kube-api-access-v4ghk\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.374446 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl" (OuterVolumeSpecName: "kube-api-access-tqdpl") pod "ec05960b-b36c-408b-af7e-3b5b312882fc" (UID: "ec05960b-b36c-408b-af7e-3b5b312882fc"). InnerVolumeSpecName "kube-api-access-tqdpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.398316 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data" (OuterVolumeSpecName: "config-data") pod "ec05960b-b36c-408b-af7e-3b5b312882fc" (UID: "ec05960b-b36c-408b-af7e-3b5b312882fc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.473981 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqdpl\" (UniqueName: \"kubernetes.io/projected/ec05960b-b36c-408b-af7e-3b5b312882fc-kube-api-access-tqdpl\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.474414 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec05960b-b36c-408b-af7e-3b5b312882fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.756743 4775 generic.go:334] "Generic (PLEG): container finished" podID="3429d990-e795-4241-bb25-8871be747a75" containerID="9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da" exitCode=0 Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.756854 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"3429d990-e795-4241-bb25-8871be747a75","Type":"ContainerDied","Data":"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.756888 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-2" event={"ID":"3429d990-e795-4241-bb25-8871be747a75","Type":"ContainerDied","Data":"69f48396ed04cbb0dab3b9624bbf068a6a9d790e274fbfcc9a62a2e58dc96c61"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.756942 4775 scope.go:117] "RemoveContainer" containerID="9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.757461 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.768542 4775 generic.go:334] "Generic (PLEG): container finished" podID="ec05960b-b36c-408b-af7e-3b5b312882fc" containerID="736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2" exitCode=0 Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.768603 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-1" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.768625 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"ec05960b-b36c-408b-af7e-3b5b312882fc","Type":"ContainerDied","Data":"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.768790 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-1" event={"ID":"ec05960b-b36c-408b-af7e-3b5b312882fc","Type":"ContainerDied","Data":"b8e794aac4ed73289d855382e742fffe4df1b0a69c82118afa283730ec3b3a07"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.772424 4775 generic.go:334] "Generic (PLEG): container finished" podID="cd9699c7-620b-45ed-9acf-d8d68558592a" containerID="a7f5a876f1b2c9412ba3369766da30eec726860b52b560828567dc91661b80f6" exitCode=0 Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.772508 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"cd9699c7-620b-45ed-9acf-d8d68558592a","Type":"ContainerDied","Data":"a7f5a876f1b2c9412ba3369766da30eec726860b52b560828567dc91661b80f6"} Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.852245 4775 scope.go:117] "RemoveContainer" containerID="9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da" Jan 23 14:33:08 crc kubenswrapper[4775]: E0123 14:33:08.853038 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da\": container with ID starting with 9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da not found: ID does not exist" containerID="9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.853089 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da"} err="failed to get container status \"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da\": rpc error: code = NotFound desc = could not find container \"9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da\": container with ID starting with 9a528ef489a9b4b96d6b67753c9ecaca53ab635c1fd73d9b5c4711af0a4c42da not found: ID does not exist" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.853118 4775 scope.go:117] "RemoveContainer" containerID="736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.947394 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.962306 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.974181 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-2"] Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.990036 4775 scope.go:117] "RemoveContainer" containerID="736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.991895 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 23 14:33:08 crc kubenswrapper[4775]: E0123 14:33:08.998450 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2\": container with ID starting with 736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2 not found: ID does not exist" containerID="736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2" Jan 23 14:33:08 crc kubenswrapper[4775]: I0123 14:33:08.998504 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2"} err="failed to get container status \"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2\": rpc error: code = NotFound desc = could not find container \"736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2\": container with ID starting with 736d171dedb2f0da8c4e5fe544bfa3a87f50cc86a3bce080473d5aa3898538f2 not found: ID does not exist" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.021266 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-1"] Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.091873 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm4rh\" (UniqueName: \"kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh\") pod \"cd9699c7-620b-45ed-9acf-d8d68558592a\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.091971 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data\") pod \"cd9699c7-620b-45ed-9acf-d8d68558592a\" (UID: \"cd9699c7-620b-45ed-9acf-d8d68558592a\") " Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.096352 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh" (OuterVolumeSpecName: "kube-api-access-qm4rh") pod "cd9699c7-620b-45ed-9acf-d8d68558592a" (UID: "cd9699c7-620b-45ed-9acf-d8d68558592a"). InnerVolumeSpecName "kube-api-access-qm4rh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.113318 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data" (OuterVolumeSpecName: "config-data") pod "cd9699c7-620b-45ed-9acf-d8d68558592a" (UID: "cd9699c7-620b-45ed-9acf-d8d68558592a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.152548 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.199088 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm4rh\" (UniqueName: \"kubernetes.io/projected/cd9699c7-620b-45ed-9acf-d8d68558592a-kube-api-access-qm4rh\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.199146 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd9699c7-620b-45ed-9acf-d8d68558592a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.300256 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csm4d\" (UniqueName: \"kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d\") pod \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.300349 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data\") pod \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\" (UID: \"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b\") " Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.304464 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d" (OuterVolumeSpecName: "kube-api-access-csm4d") pod "899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" (UID: "899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b"). InnerVolumeSpecName "kube-api-access-csm4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.326936 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data" (OuterVolumeSpecName: "config-data") pod "899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" (UID: "899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.402988 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csm4d\" (UniqueName: \"kubernetes.io/projected/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-kube-api-access-csm4d\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.403276 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.736771 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3429d990-e795-4241-bb25-8871be747a75" path="/var/lib/kubelet/pods/3429d990-e795-4241-bb25-8871be747a75/volumes" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.737944 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec05960b-b36c-408b-af7e-3b5b312882fc" path="/var/lib/kubelet/pods/ec05960b-b36c-408b-af7e-3b5b312882fc/volumes" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.805229 4775 generic.go:334] "Generic (PLEG): container finished" podID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" containerID="3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc" exitCode=0 Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.805280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b","Type":"ContainerDied","Data":"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc"} Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.805340 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" event={"ID":"899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b","Type":"ContainerDied","Data":"ff7647e6e88e0b90a60d3df6c11bd8f2d1a96b5234e9ed48e01eca64c74d9d98"} Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.805363 4775 scope.go:117] "RemoveContainer" containerID="3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.806932 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-1" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.807661 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" event={"ID":"cd9699c7-620b-45ed-9acf-d8d68558592a","Type":"ContainerDied","Data":"8b77f3af6b0f185a5161fdaa5749b6a9f045ed71bd64a9e9cc6ddd8f8cc700d4"} Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.807731 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-2" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.838605 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.842318 4775 scope.go:117] "RemoveContainer" containerID="3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc" Jan 23 14:33:09 crc kubenswrapper[4775]: E0123 14:33:09.842920 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc\": container with ID starting with 3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc not found: ID does not exist" containerID="3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.842952 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc"} err="failed to get container status \"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc\": rpc error: code = NotFound desc = could not find container \"3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc\": container with ID starting with 3436442dbce900098e0a3b947a5679828fe40f90f8fc710a8899b8572b5ad5cc not found: ID does not exist" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.842973 4775 scope.go:117] "RemoveContainer" containerID="a7f5a876f1b2c9412ba3369766da30eec726860b52b560828567dc91661b80f6" Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.852560 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-2"] Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.863881 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 23 14:33:09 crc kubenswrapper[4775]: I0123 14:33:09.871800 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-1"] Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.630594 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.635729 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.723644 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data\") pod \"93e53da4-e769-460a-b299-07131d928b83\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.723895 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vthpv\" (UniqueName: \"kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv\") pod \"93e53da4-e769-460a-b299-07131d928b83\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724015 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs\") pod \"93e53da4-e769-460a-b299-07131d928b83\" (UID: \"93e53da4-e769-460a-b299-07131d928b83\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724074 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs\") pod \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724141 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data\") pod \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724235 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbjn6\" (UniqueName: \"kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6\") pod \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\" (UID: \"1f8451d7-e2c8-4d37-838f-b5042ceabc86\") " Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724580 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs" (OuterVolumeSpecName: "logs") pod "1f8451d7-e2c8-4d37-838f-b5042ceabc86" (UID: "1f8451d7-e2c8-4d37-838f-b5042ceabc86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724621 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs" (OuterVolumeSpecName: "logs") pod "93e53da4-e769-460a-b299-07131d928b83" (UID: "93e53da4-e769-460a-b299-07131d928b83"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724961 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93e53da4-e769-460a-b299-07131d928b83-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.724993 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f8451d7-e2c8-4d37-838f-b5042ceabc86-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.729311 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv" (OuterVolumeSpecName: "kube-api-access-vthpv") pod "93e53da4-e769-460a-b299-07131d928b83" (UID: "93e53da4-e769-460a-b299-07131d928b83"). InnerVolumeSpecName "kube-api-access-vthpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.729354 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6" (OuterVolumeSpecName: "kube-api-access-xbjn6") pod "1f8451d7-e2c8-4d37-838f-b5042ceabc86" (UID: "1f8451d7-e2c8-4d37-838f-b5042ceabc86"). InnerVolumeSpecName "kube-api-access-xbjn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.745668 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data" (OuterVolumeSpecName: "config-data") pod "93e53da4-e769-460a-b299-07131d928b83" (UID: "93e53da4-e769-460a-b299-07131d928b83"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.747946 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data" (OuterVolumeSpecName: "config-data") pod "1f8451d7-e2c8-4d37-838f-b5042ceabc86" (UID: "1f8451d7-e2c8-4d37-838f-b5042ceabc86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.815818 4775 generic.go:334] "Generic (PLEG): container finished" podID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerID="99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3" exitCode=0 Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.816223 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerDied","Data":"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3"} Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.816254 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-1" event={"ID":"1f8451d7-e2c8-4d37-838f-b5042ceabc86","Type":"ContainerDied","Data":"915f8179b0a7a6e696f78019ea25b2951e8b526105585716102ad10d5d921fdc"} Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.816273 4775 scope.go:117] "RemoveContainer" containerID="99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.816380 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-1" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.822947 4775 generic.go:334] "Generic (PLEG): container finished" podID="93e53da4-e769-460a-b299-07131d928b83" containerID="d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65" exitCode=0 Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.822977 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerDied","Data":"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65"} Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.822994 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-2" event={"ID":"93e53da4-e769-460a-b299-07131d928b83","Type":"ContainerDied","Data":"60356a4b31a8069b59d95c548c20dccce95f5173efd6d074a66247e83d02c3f3"} Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.823017 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-2" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.825643 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbjn6\" (UniqueName: \"kubernetes.io/projected/1f8451d7-e2c8-4d37-838f-b5042ceabc86-kube-api-access-xbjn6\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.825662 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93e53da4-e769-460a-b299-07131d928b83-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.825673 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vthpv\" (UniqueName: \"kubernetes.io/projected/93e53da4-e769-460a-b299-07131d928b83-kube-api-access-vthpv\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.825685 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f8451d7-e2c8-4d37-838f-b5042ceabc86-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.842248 4775 scope.go:117] "RemoveContainer" containerID="8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.859989 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.869161 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-1"] Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.912215 4775 scope.go:117] "RemoveContainer" containerID="99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3" Jan 23 14:33:10 crc kubenswrapper[4775]: E0123 14:33:10.912862 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3\": container with ID starting with 99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3 not found: ID does not exist" containerID="99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.912911 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3"} err="failed to get container status \"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3\": rpc error: code = NotFound desc = could not find container \"99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3\": container with ID starting with 99eb7e2e686344d06bacb14b07ca9db3cf66056ae2537d284693419e0f8c15e3 not found: ID does not exist" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.912936 4775 scope.go:117] "RemoveContainer" containerID="8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5" Jan 23 14:33:10 crc kubenswrapper[4775]: E0123 14:33:10.913379 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5\": container with ID starting with 8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5 not found: ID does not exist" containerID="8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.913501 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5"} err="failed to get container status \"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5\": rpc error: code = NotFound desc = could not find container \"8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5\": container with ID starting with 8bb734600e802f925272d19ee91b082bc20a92958621709db8bcda1373be8cd5 not found: ID does not exist" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.913606 4775 scope.go:117] "RemoveContainer" containerID="d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.915315 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.922369 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-2"] Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.937146 4775 scope.go:117] "RemoveContainer" containerID="a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.953401 4775 scope.go:117] "RemoveContainer" containerID="d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65" Jan 23 14:33:10 crc kubenswrapper[4775]: E0123 14:33:10.953785 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65\": container with ID starting with d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65 not found: ID does not exist" containerID="d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.953904 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65"} err="failed to get container status \"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65\": rpc error: code = NotFound desc = could not find container \"d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65\": container with ID starting with 
d3a4946fb4d2fe2a9a5683281e70f94df6d1c65d02c5f7cebaeee17f058e4a65 not found: ID does not exist" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.954004 4775 scope.go:117] "RemoveContainer" containerID="a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc" Jan 23 14:33:10 crc kubenswrapper[4775]: E0123 14:33:10.954299 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc\": container with ID starting with a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc not found: ID does not exist" containerID="a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc" Jan 23 14:33:10 crc kubenswrapper[4775]: I0123 14:33:10.954408 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc"} err="failed to get container status \"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc\": rpc error: code = NotFound desc = could not find container \"a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc\": container with ID starting with a22aeb3b9de2421ab074e1ffccb7be34ee4fb1066458792f0e03da32e3b371bc not found: ID does not exist" Jan 23 14:33:11 crc kubenswrapper[4775]: I0123 14:33:11.727540 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" path="/var/lib/kubelet/pods/1f8451d7-e2c8-4d37-838f-b5042ceabc86/volumes" Jan 23 14:33:11 crc kubenswrapper[4775]: I0123 14:33:11.728797 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" path="/var/lib/kubelet/pods/899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b/volumes" Jan 23 14:33:11 crc kubenswrapper[4775]: I0123 14:33:11.729837 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93e53da4-e769-460a-b299-07131d928b83" path="/var/lib/kubelet/pods/93e53da4-e769-460a-b299-07131d928b83/volumes" Jan 23 14:33:11 crc kubenswrapper[4775]: I0123 14:33:11.732418 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9699c7-620b-45ed-9acf-d8d68558592a" path="/var/lib/kubelet/pods/cd9699c7-620b-45ed-9acf-d8d68558592a/volumes" Jan 23 14:33:19 crc kubenswrapper[4775]: I0123 14:33:19.714301 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:33:19 crc kubenswrapper[4775]: E0123 14:33:19.715462 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.364128 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.365016 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-log" containerID="cri-o://c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05" gracePeriod=30 Jan 23 14:33:24 crc 
kubenswrapper[4775]: I0123 14:33:24.365173 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-api" containerID="cri-o://03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762" gracePeriod=30 Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.729065 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.729580 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" gracePeriod=30 Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.969342 4775 generic.go:334] "Generic (PLEG): container finished" podID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerID="c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05" exitCode=143 Jan 23 14:33:24 crc kubenswrapper[4775]: I0123 14:33:24.969395 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerDied","Data":"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05"} Jan 23 14:33:25 crc kubenswrapper[4775]: E0123 14:33:25.748986 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:25 crc kubenswrapper[4775]: E0123 14:33:25.751757 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:25 crc kubenswrapper[4775]: E0123 14:33:25.754397 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:33:25 crc kubenswrapper[4775]: E0123 14:33:25.754529 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:27 crc kubenswrapper[4775]: I0123 14:33:27.946592 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.015695 4775 generic.go:334] "Generic (PLEG): container finished" podID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerID="03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762" exitCode=0 Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.015737 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerDied","Data":"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762"} Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.015762 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"8da8e70a-bee6-4082-a0c5-8419ea3f86a6","Type":"ContainerDied","Data":"5bbd58bc5eb6780b68e8d968266f41a0b7126273d93210d99f32930850e03151"} Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.015781 4775 scope.go:117] "RemoveContainer" containerID="03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.015961 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.044334 4775 scope.go:117] "RemoveContainer" containerID="c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.077773 4775 scope.go:117] "RemoveContainer" containerID="03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762" Jan 23 14:33:28 crc kubenswrapper[4775]: E0123 14:33:28.078367 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762\": container with ID starting with 03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762 not found: ID does not exist" containerID="03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.078438 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762"} err="failed to get container status \"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762\": rpc error: code = NotFound desc = could not find container \"03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762\": container with ID starting with 03dae20f5ec29320c7fe34119020ccbc13c7cae126690fd030e309307e495762 not found: ID does not exist" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.078472 4775 scope.go:117] "RemoveContainer" containerID="c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05" Jan 23 14:33:28 crc kubenswrapper[4775]: E0123 14:33:28.078956 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05\": container with ID starting with c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05 not found: ID does not exist" containerID="c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.078997 4775 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05"} err="failed to get container status \"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05\": rpc error: code = NotFound desc = could not find container \"c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05\": container with ID starting with c0f199e96e42ee98742c70e0f678217496127272f948f51e4ea5ea7a1c513f05 not found: ID does not exist" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.137528 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data\") pod \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.137580 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs\") pod \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.137763 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7gh9\" (UniqueName: \"kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9\") pod \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\" (UID: \"8da8e70a-bee6-4082-a0c5-8419ea3f86a6\") " Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.138608 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs" (OuterVolumeSpecName: "logs") pod "8da8e70a-bee6-4082-a0c5-8419ea3f86a6" (UID: "8da8e70a-bee6-4082-a0c5-8419ea3f86a6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.143500 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9" (OuterVolumeSpecName: "kube-api-access-c7gh9") pod "8da8e70a-bee6-4082-a0c5-8419ea3f86a6" (UID: "8da8e70a-bee6-4082-a0c5-8419ea3f86a6"). InnerVolumeSpecName "kube-api-access-c7gh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.164010 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data" (OuterVolumeSpecName: "config-data") pod "8da8e70a-bee6-4082-a0c5-8419ea3f86a6" (UID: "8da8e70a-bee6-4082-a0c5-8419ea3f86a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.241005 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7gh9\" (UniqueName: \"kubernetes.io/projected/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-kube-api-access-c7gh9\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.241055 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.241075 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8da8e70a-bee6-4082-a0c5-8419ea3f86a6-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.359606 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:33:28 crc kubenswrapper[4775]: I0123 14:33:28.371270 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:33:29 crc kubenswrapper[4775]: I0123 14:33:29.727915 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" path="/var/lib/kubelet/pods/8da8e70a-bee6-4082-a0c5-8419ea3f86a6/volumes" Jan 23 14:33:29 crc kubenswrapper[4775]: I0123 14:33:29.854738 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:33:29 crc kubenswrapper[4775]: I0123 14:33:29.989279 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data\") pod \"84473a0d-a6e7-41ab-8b88-07b8ed888950\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " Jan 23 14:33:29 crc kubenswrapper[4775]: I0123 14:33:29.989435 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26pl\" (UniqueName: \"kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl\") pod \"84473a0d-a6e7-41ab-8b88-07b8ed888950\" (UID: \"84473a0d-a6e7-41ab-8b88-07b8ed888950\") " Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.001150 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl" (OuterVolumeSpecName: "kube-api-access-m26pl") pod "84473a0d-a6e7-41ab-8b88-07b8ed888950" (UID: "84473a0d-a6e7-41ab-8b88-07b8ed888950"). InnerVolumeSpecName "kube-api-access-m26pl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.033322 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data" (OuterVolumeSpecName: "config-data") pod "84473a0d-a6e7-41ab-8b88-07b8ed888950" (UID: "84473a0d-a6e7-41ab-8b88-07b8ed888950"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.041268 4775 generic.go:334] "Generic (PLEG): container finished" podID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" exitCode=0 Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.041331 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"84473a0d-a6e7-41ab-8b88-07b8ed888950","Type":"ContainerDied","Data":"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678"} Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.041369 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"84473a0d-a6e7-41ab-8b88-07b8ed888950","Type":"ContainerDied","Data":"b44ad7319eff2652d4ad8fadab672eed48adfae26f3c8e4cc8c6eb5f3b5d2bc0"} Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.041403 4775 scope.go:117] "RemoveContainer" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.041553 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.074970 4775 scope.go:117] "RemoveContainer" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" Jan 23 14:33:30 crc kubenswrapper[4775]: E0123 14:33:30.075516 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678\": container with ID starting with dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678 not found: ID does not exist" containerID="dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.075599 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678"} err="failed to get container status \"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678\": rpc error: code = NotFound desc = could not find container \"dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678\": container with ID starting with dc374e41b812f145b9a3d5437aa30440decff971ec9b42763a18a56b3992b678 not found: ID does not exist" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.092613 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84473a0d-a6e7-41ab-8b88-07b8ed888950-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.092662 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m26pl\" (UniqueName: \"kubernetes.io/projected/84473a0d-a6e7-41ab-8b88-07b8ed888950-kube-api-access-m26pl\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.102788 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.110235 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.449317 4775 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.449579 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="daaf7413-398a-4a39-a375-c130187f9726" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a" gracePeriod=30 Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.570686 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.571253 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log" containerID="cri-o://64ad254d6ba4ee3740ce23f48d5a83bfdac9d38cd1e51e005d44e141074beaa9" gracePeriod=30 Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.573046 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://2ec2d8ee517098a55339c83b7adf972f94f667aba8e7519f92926f2a080db62e" gracePeriod=30 Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.733260 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:33:30 crc kubenswrapper[4775]: I0123 14:33:30.733453 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a" gracePeriod=30 Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.013230 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.019572 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-vtvrt"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.028384 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.034448 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-lnndf"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.051155 4775 generic.go:334] "Generic (PLEG): container finished" podID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerID="64ad254d6ba4ee3740ce23f48d5a83bfdac9d38cd1e51e005d44e141074beaa9" exitCode=143 Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.051204 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerDied","Data":"64ad254d6ba4ee3740ce23f48d5a83bfdac9d38cd1e51e005d44e141074beaa9"} Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.054974 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell06ec2-account-delete-t28fh"] Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055291 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ec05960b-b36c-408b-af7e-3b5b312882fc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055312 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec05960b-b36c-408b-af7e-3b5b312882fc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055328 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3429d990-e795-4241-bb25-8871be747a75" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055337 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3429d990-e795-4241-bb25-8871be747a75" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055346 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055352 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055358 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055364 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055375 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055381 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055392 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055397 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055408 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055414 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055426 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055431 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055440 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="422f57ad-3c24-4af9-aa50-c17639a07403" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055448 4775 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="422f57ad-3c24-4af9-aa50-c17639a07403" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055458 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055464 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055474 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055480 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055491 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055497 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055508 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055513 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055522 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055528 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055539 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9699c7-620b-45ed-9acf-d8d68558592a" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055545 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9699c7-620b-45ed-9acf-d8d68558592a" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055555 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055561 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.055570 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055575 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055704 4775 
memory_manager.go:354] "RemoveStaleState removing state" podUID="899eaf4f-9baf-4a85-888f-a5e9ed8bcf2b" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055712 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="93184515-7dbf-4aeb-823f-0146b2a66d39" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055719 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3429d990-e795-4241-bb25-8871be747a75" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055728 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055738 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055748 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055755 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f8451d7-e2c8-4d37-838f-b5042ceabc86" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055763 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055771 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="93e53da4-e769-460a-b299-07131d928b83" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055780 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055787 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055795 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8da8e70a-bee6-4082-a0c5-8419ea3f86a6" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055818 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a771c767-804b-4c42-bfc9-e6982acea366" containerName="nova-kuttl-api-log" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055826 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="422f57ad-3c24-4af9-aa50-c17639a07403" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055835 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3a307d6-651f-4f43-83ec-6d1e1118f7ad" containerName="nova-kuttl-api-api" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055842 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec05960b-b36c-408b-af7e-3b5b312882fc" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.055851 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9699c7-620b-45ed-9acf-d8d68558592a" 
containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.056335 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.065496 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell06ec2-account-delete-t28fh"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.102945 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell1ba32-account-delete-hdrb4"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.103879 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.124631 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1ba32-account-delete-hdrb4"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.164968 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapi9a1c-account-delete-8fps4"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.173306 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.175903 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi9a1c-account-delete-8fps4"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.208604 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfzx\" (UniqueName: \"kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx\") pod \"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.208646 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ls2l\" (UniqueName: \"kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.208733 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts\") pod \"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.208785 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.291101 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.291298 4775 
kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://b3037b72f855e3514727ac579826433af99bcec07db67273c699c91b0c386a1b" gracePeriod=30 Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310339 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mfzx\" (UniqueName: \"kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx\") pod \"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310408 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ls2l\" (UniqueName: \"kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310432 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l274z\" (UniqueName: \"kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310618 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310696 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts\") pod \"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.310767 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.311443 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.311495 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts\") pod 
\"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.330386 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfzx\" (UniqueName: \"kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx\") pod \"novacell06ec2-account-delete-t28fh\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.331252 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ls2l\" (UniqueName: \"kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l\") pod \"novacell1ba32-account-delete-hdrb4\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.372144 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.411943 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l274z\" (UniqueName: \"kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.412006 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.412911 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.420439 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.434900 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l274z\" (UniqueName: \"kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z\") pod \"novaapi9a1c-account-delete-8fps4\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.492078 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.715981 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:33:31 crc kubenswrapper[4775]: E0123 14:33:31.716587 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.729961 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84473a0d-a6e7-41ab-8b88-07b8ed888950" path="/var/lib/kubelet/pods/84473a0d-a6e7-41ab-8b88-07b8ed888950/volumes" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.731028 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9f9b55-ea71-4396-82bf-2a49788ccc42" path="/var/lib/kubelet/pods/bc9f9b55-ea71-4396-82bf-2a49788ccc42/volumes" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.731769 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f751d2a1-4497-4fb2-9c13-af54db584a48" path="/var/lib/kubelet/pods/f751d2a1-4497-4fb2-9c13-af54db584a48/volumes" Jan 23 14:33:31 crc kubenswrapper[4775]: I0123 14:33:31.977494 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell06ec2-account-delete-t28fh"] Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.051005 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell1ba32-account-delete-hdrb4"] Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.062620 4775 generic.go:334] "Generic (PLEG): container finished" podID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerID="b3037b72f855e3514727ac579826433af99bcec07db67273c699c91b0c386a1b" exitCode=0 Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.062676 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760","Type":"ContainerDied","Data":"b3037b72f855e3514727ac579826433af99bcec07db67273c699c91b0c386a1b"} Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.066673 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" event={"ID":"3f14b26a-2160-432f-a6cf-f3fab1f31afc","Type":"ContainerStarted","Data":"d0b63f0b8cc603dfbd347c0bc24572e8875a9ad0337253a29c42786093964643"} Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.143162 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapi9a1c-account-delete-8fps4"] Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.166998 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.241001 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kpdm\" (UniqueName: \"kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm\") pod \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.241487 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data\") pod \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\" (UID: \"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760\") " Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.254975 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm" (OuterVolumeSpecName: "kube-api-access-7kpdm") pod "5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" (UID: "5a2ad7dd-d80c-4eb4-8531-c2a8208bb760"). InnerVolumeSpecName "kube-api-access-7kpdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.270246 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data" (OuterVolumeSpecName: "config-data") pod "5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" (UID: "5a2ad7dd-d80c-4eb4-8531-c2a8208bb760"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.344164 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kpdm\" (UniqueName: \"kubernetes.io/projected/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-kube-api-access-7kpdm\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.344194 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.432725 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.444865 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5zwr\" (UniqueName: \"kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr\") pod \"daaf7413-398a-4a39-a375-c130187f9726\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.444925 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data\") pod \"daaf7413-398a-4a39-a375-c130187f9726\" (UID: \"daaf7413-398a-4a39-a375-c130187f9726\") " Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.448726 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr" (OuterVolumeSpecName: "kube-api-access-r5zwr") pod "daaf7413-398a-4a39-a375-c130187f9726" (UID: "daaf7413-398a-4a39-a375-c130187f9726"). InnerVolumeSpecName "kube-api-access-r5zwr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.466024 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data" (OuterVolumeSpecName: "config-data") pod "daaf7413-398a-4a39-a375-c130187f9726" (UID: "daaf7413-398a-4a39-a375-c130187f9726"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.546368 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5zwr\" (UniqueName: \"kubernetes.io/projected/daaf7413-398a-4a39-a375-c130187f9726-kube-api-access-r5zwr\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.546402 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daaf7413-398a-4a39-a375-c130187f9726-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.716346 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.747793 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7tvz\" (UniqueName: \"kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz\") pod \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.747884 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data\") pod \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\" (UID: \"4e279d5d-df37-483b-9bc7-682b48b2dbc4\") " Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.820370 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz" (OuterVolumeSpecName: "kube-api-access-c7tvz") pod "4e279d5d-df37-483b-9bc7-682b48b2dbc4" (UID: "4e279d5d-df37-483b-9bc7-682b48b2dbc4"). InnerVolumeSpecName "kube-api-access-c7tvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.825261 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data" (OuterVolumeSpecName: "config-data") pod "4e279d5d-df37-483b-9bc7-682b48b2dbc4" (UID: "4e279d5d-df37-483b-9bc7-682b48b2dbc4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.852988 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7tvz\" (UniqueName: \"kubernetes.io/projected/4e279d5d-df37-483b-9bc7-682b48b2dbc4-kube-api-access-c7tvz\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:32 crc kubenswrapper[4775]: I0123 14:33:32.853041 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e279d5d-df37-483b-9bc7-682b48b2dbc4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.077486 4775 generic.go:334] "Generic (PLEG): container finished" podID="3f14b26a-2160-432f-a6cf-f3fab1f31afc" containerID="f1433b1b1039e1ad5b79126e2b4c0ca66e85ee090af1bd408ecba19e2c872f9a" exitCode=0 Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.077672 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" event={"ID":"3f14b26a-2160-432f-a6cf-f3fab1f31afc","Type":"ContainerDied","Data":"f1433b1b1039e1ad5b79126e2b4c0ca66e85ee090af1bd408ecba19e2c872f9a"} Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.079398 4775 generic.go:334] "Generic (PLEG): container finished" podID="c25c2a05-1d9b-4551-9c92-f04da2897895" containerID="edf9ee8a876623f0b7161ac8eb02db7ebf284b2ff4311bc67eb9dd19aea83eba" exitCode=0 Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.079466 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" event={"ID":"c25c2a05-1d9b-4551-9c92-f04da2897895","Type":"ContainerDied","Data":"edf9ee8a876623f0b7161ac8eb02db7ebf284b2ff4311bc67eb9dd19aea83eba"} Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.079496 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" event={"ID":"c25c2a05-1d9b-4551-9c92-f04da2897895","Type":"ContainerStarted","Data":"85fbf5104c3f0c20a528dc6968da2d23d023d46bdce0c270eb7dbfa1de186eab"} Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.080947 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"5a2ad7dd-d80c-4eb4-8531-c2a8208bb760","Type":"ContainerDied","Data":"73c77f39c3e21579fd11ef895bb7a7f0e8b32a22edb065c50cab5df5c5dc9b81"} Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.080988 4775 scope.go:117] "RemoveContainer" containerID="b3037b72f855e3514727ac579826433af99bcec07db67273c699c91b0c386a1b" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.081000 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.083765 4775 generic.go:334] "Generic (PLEG): container finished" podID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" containerID="e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a" exitCode=0 Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.084075 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"4e279d5d-df37-483b-9bc7-682b48b2dbc4","Type":"ContainerDied","Data":"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a"} Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.084133 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"4e279d5d-df37-483b-9bc7-682b48b2dbc4","Type":"ContainerDied","Data":"004f895311337c942728dd641397c9a9477c224ca4d5348fe186974622dce3f9"} Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.084261 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.101341 4775 scope.go:117] "RemoveContainer" containerID="e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.102116 4775 generic.go:334] "Generic (PLEG): container finished" podID="66eb744b-ea4a-4973-8492-2d652c20c447" containerID="8739f351b2bc9ad8d8fe3ea2133ea2116442a4d5b5cf5ef247dd695ec789dddf" exitCode=0 Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.102179 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" event={"ID":"66eb744b-ea4a-4973-8492-2d652c20c447","Type":"ContainerDied","Data":"8739f351b2bc9ad8d8fe3ea2133ea2116442a4d5b5cf5ef247dd695ec789dddf"} Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.102209 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" event={"ID":"66eb744b-ea4a-4973-8492-2d652c20c447","Type":"ContainerStarted","Data":"66c2ac152aae3c99146ada164002c3c1330dfef6f8de078ecc93d7dbfb32d407"} Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.104488 4775 generic.go:334] "Generic (PLEG): container finished" podID="daaf7413-398a-4a39-a375-c130187f9726" containerID="3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a" exitCode=0 Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.104633 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"daaf7413-398a-4a39-a375-c130187f9726","Type":"ContainerDied","Data":"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a"} Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.104748 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"daaf7413-398a-4a39-a375-c130187f9726","Type":"ContainerDied","Data":"97fad5da4691bcf418d5d7014464949a4751476840d2d4bd08f07e42875a279d"} Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.104924 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.129282 4775 scope.go:117] "RemoveContainer" containerID="e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a" Jan 23 14:33:33 crc kubenswrapper[4775]: E0123 14:33:33.131448 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a\": container with ID starting with e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a not found: ID does not exist" containerID="e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.131487 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a"} err="failed to get container status \"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a\": rpc error: code = NotFound desc = could not find container \"e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a\": container with ID starting with e4096d3b7888413c8e0420a378fc8bb781cb9864846833a4e649d155b711ef1a not found: ID does not exist" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.131514 4775 scope.go:117] "RemoveContainer" containerID="3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.152467 4775 scope.go:117] "RemoveContainer" containerID="3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a" Jan 23 14:33:33 crc kubenswrapper[4775]: E0123 14:33:33.153039 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a\": container with ID starting with 3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a not found: ID does not exist" containerID="3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.153084 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a"} err="failed to get container status \"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a\": rpc error: code = NotFound desc = could not find container \"3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a\": container with ID starting with 3ba5fc19235d3db712a04f428f14e623c0a46cd37e971af89d028a76dc93187a not found: ID does not exist" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.171920 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.180748 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.194508 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.202639 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.208840 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.214688 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.722070 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" path="/var/lib/kubelet/pods/4e279d5d-df37-483b-9bc7-682b48b2dbc4/volumes" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.722905 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" path="/var/lib/kubelet/pods/5a2ad7dd-d80c-4eb4-8531-c2a8208bb760/volumes" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.723397 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daaf7413-398a-4a39-a375-c130187f9726" path="/var/lib/kubelet/pods/daaf7413-398a-4a39-a375-c130187f9726/volumes" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.759305 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.163:8775/\": read tcp 10.217.0.2:38066->10.217.0.163:8775: read: connection reset by peer" Jan 23 14:33:33 crc kubenswrapper[4775]: I0123 14:33:33.759328 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.163:8775/\": read tcp 10.217.0.2:38072->10.217.0.163:8775: read: connection reset by peer" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.120161 4775 generic.go:334] "Generic (PLEG): container finished" podID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerID="2ec2d8ee517098a55339c83b7adf972f94f667aba8e7519f92926f2a080db62e" exitCode=0 Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.120349 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerDied","Data":"2ec2d8ee517098a55339c83b7adf972f94f667aba8e7519f92926f2a080db62e"} Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.204665 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.377392 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data\") pod \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.377483 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skdhl\" (UniqueName: \"kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl\") pod \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.377511 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs\") pod \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\" (UID: \"08cc29e8-1d83-4f1e-b343-a813a06c7f5a\") " Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.378179 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs" (OuterVolumeSpecName: "logs") pod "08cc29e8-1d83-4f1e-b343-a813a06c7f5a" (UID: "08cc29e8-1d83-4f1e-b343-a813a06c7f5a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.405354 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl" (OuterVolumeSpecName: "kube-api-access-skdhl") pod "08cc29e8-1d83-4f1e-b343-a813a06c7f5a" (UID: "08cc29e8-1d83-4f1e-b343-a813a06c7f5a"). InnerVolumeSpecName "kube-api-access-skdhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.417754 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data" (OuterVolumeSpecName: "config-data") pod "08cc29e8-1d83-4f1e-b343-a813a06c7f5a" (UID: "08cc29e8-1d83-4f1e-b343-a813a06c7f5a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.441562 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.456629 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.480743 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.480782 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skdhl\" (UniqueName: \"kubernetes.io/projected/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-kube-api-access-skdhl\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.480793 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/08cc29e8-1d83-4f1e-b343-a813a06c7f5a-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.497467 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.581687 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts\") pod \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.581812 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mfzx\" (UniqueName: \"kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx\") pod \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\" (UID: \"3f14b26a-2160-432f-a6cf-f3fab1f31afc\") " Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.581849 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l274z\" (UniqueName: \"kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z\") pod \"66eb744b-ea4a-4973-8492-2d652c20c447\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.581944 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts\") pod \"66eb744b-ea4a-4973-8492-2d652c20c447\" (UID: \"66eb744b-ea4a-4973-8492-2d652c20c447\") " Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.582376 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3f14b26a-2160-432f-a6cf-f3fab1f31afc" (UID: "3f14b26a-2160-432f-a6cf-f3fab1f31afc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.582925 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "66eb744b-ea4a-4973-8492-2d652c20c447" (UID: "66eb744b-ea4a-4973-8492-2d652c20c447"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.585642 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx" (OuterVolumeSpecName: "kube-api-access-9mfzx") pod "3f14b26a-2160-432f-a6cf-f3fab1f31afc" (UID: "3f14b26a-2160-432f-a6cf-f3fab1f31afc"). InnerVolumeSpecName "kube-api-access-9mfzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.586180 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z" (OuterVolumeSpecName: "kube-api-access-l274z") pod "66eb744b-ea4a-4973-8492-2d652c20c447" (UID: "66eb744b-ea4a-4973-8492-2d652c20c447"). InnerVolumeSpecName "kube-api-access-l274z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.683765 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ls2l\" (UniqueName: \"kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l\") pod \"c25c2a05-1d9b-4551-9c92-f04da2897895\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.683962 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts\") pod \"c25c2a05-1d9b-4551-9c92-f04da2897895\" (UID: \"c25c2a05-1d9b-4551-9c92-f04da2897895\") " Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.684415 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c25c2a05-1d9b-4551-9c92-f04da2897895" (UID: "c25c2a05-1d9b-4551-9c92-f04da2897895"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.684526 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mfzx\" (UniqueName: \"kubernetes.io/projected/3f14b26a-2160-432f-a6cf-f3fab1f31afc-kube-api-access-9mfzx\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.684548 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l274z\" (UniqueName: \"kubernetes.io/projected/66eb744b-ea4a-4973-8492-2d652c20c447-kube-api-access-l274z\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.684562 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/66eb744b-ea4a-4973-8492-2d652c20c447-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.684574 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3f14b26a-2160-432f-a6cf-f3fab1f31afc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.688995 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l" (OuterVolumeSpecName: "kube-api-access-4ls2l") pod "c25c2a05-1d9b-4551-9c92-f04da2897895" (UID: "c25c2a05-1d9b-4551-9c92-f04da2897895"). InnerVolumeSpecName "kube-api-access-4ls2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.785954 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c25c2a05-1d9b-4551-9c92-f04da2897895-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:34 crc kubenswrapper[4775]: I0123 14:33:34.786008 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ls2l\" (UniqueName: \"kubernetes.io/projected/c25c2a05-1d9b-4551-9c92-f04da2897895-kube-api-access-4ls2l\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.129165 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" event={"ID":"66eb744b-ea4a-4973-8492-2d652c20c447","Type":"ContainerDied","Data":"66c2ac152aae3c99146ada164002c3c1330dfef6f8de078ecc93d7dbfb32d407"} Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.129830 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66c2ac152aae3c99146ada164002c3c1330dfef6f8de078ecc93d7dbfb32d407" Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.129213 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapi9a1c-account-delete-8fps4" Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.130898 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" event={"ID":"3f14b26a-2160-432f-a6cf-f3fab1f31afc","Type":"ContainerDied","Data":"d0b63f0b8cc603dfbd347c0bc24572e8875a9ad0337253a29c42786093964643"} Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.131016 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b63f0b8cc603dfbd347c0bc24572e8875a9ad0337253a29c42786093964643" Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.130966 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell06ec2-account-delete-t28fh" Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.132733 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"08cc29e8-1d83-4f1e-b343-a813a06c7f5a","Type":"ContainerDied","Data":"c04673dffc47a353d8b2f30b1c7c3756c9fa915a864e9169df809bc23ac4884f"} Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.132757 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.132839 4775 scope.go:117] "RemoveContainer" containerID="2ec2d8ee517098a55339c83b7adf972f94f667aba8e7519f92926f2a080db62e" Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.135713 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" event={"ID":"c25c2a05-1d9b-4551-9c92-f04da2897895","Type":"ContainerDied","Data":"85fbf5104c3f0c20a528dc6968da2d23d023d46bdce0c270eb7dbfa1de186eab"} Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.135753 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85fbf5104c3f0c20a528dc6968da2d23d023d46bdce0c270eb7dbfa1de186eab" Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.136081 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell1ba32-account-delete-hdrb4" Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.157204 4775 scope.go:117] "RemoveContainer" containerID="64ad254d6ba4ee3740ce23f48d5a83bfdac9d38cd1e51e005d44e141074beaa9" Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.199938 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.206216 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:33:35 crc kubenswrapper[4775]: I0123 14:33:35.733368 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" path="/var/lib/kubelet/pods/08cc29e8-1d83-4f1e-b343-a813a06c7f5a/volumes" Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.064132 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-bp7mf"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.070027 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-bp7mf"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.082201 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.090122 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell06ec2-account-delete-t28fh"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.095678 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell06ec2-account-delete-t28fh"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.101149 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-6ec2-account-create-update-6ntlz"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.169776 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-pmc6n"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.178467 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-pmc6n"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.189903 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.196066 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell1ba32-account-delete-hdrb4"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.202503 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-ba32-account-create-update-8xsh6"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.207655 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell1ba32-account-delete-hdrb4"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.269071 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hn7kx"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.277604 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-hn7kx"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.295589 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw"] Jan 23 14:33:36 crc 
kubenswrapper[4775]: I0123 14:33:36.304866 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapi9a1c-account-delete-8fps4"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.312627 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapi9a1c-account-delete-8fps4"] Jan 23 14:33:36 crc kubenswrapper[4775]: I0123 14:33:36.322597 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-9a1c-account-create-update-lmjgw"] Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.053301 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-72a2-account-create-update-4q5xn"] Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.059823 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerName="nova-kuttl-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"http://10.217.0.156:6080/vnc_lite.html\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.060853 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-72a2-account-create-update-4q5xn"] Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.735837 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f14b26a-2160-432f-a6cf-f3fab1f31afc" path="/var/lib/kubelet/pods/3f14b26a-2160-432f-a6cf-f3fab1f31afc/volumes" Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.737909 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500dfca1-a7c0-488c-89ba-2d750245e322" path="/var/lib/kubelet/pods/500dfca1-a7c0-488c-89ba-2d750245e322/volumes" Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.748370 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a0d129e-9a65-484c-b8a6-ca5a0120d95d" path="/var/lib/kubelet/pods/5a0d129e-9a65-484c-b8a6-ca5a0120d95d/volumes" Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.749066 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b494b92-3cd1-4b60-853c-a135bb158d8c" path="/var/lib/kubelet/pods/5b494b92-3cd1-4b60-853c-a135bb158d8c/volumes" Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.750149 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66eb744b-ea4a-4973-8492-2d652c20c447" path="/var/lib/kubelet/pods/66eb744b-ea4a-4973-8492-2d652c20c447/volumes" Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.750891 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68223c6c-51af-4369-87c2-368ffe71edb7" path="/var/lib/kubelet/pods/68223c6c-51af-4369-87c2-368ffe71edb7/volumes" Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.751570 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a5345f7-7dc8-4e09-8566-ee1dbb897cce" path="/var/lib/kubelet/pods/7a5345f7-7dc8-4e09-8566-ee1dbb897cce/volumes" Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.753257 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9857104-b2d2-4b42-a96d-2f9f1fadc406" path="/var/lib/kubelet/pods/a9857104-b2d2-4b42-a96d-2f9f1fadc406/volumes" Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.754084 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25c2a05-1d9b-4551-9c92-f04da2897895" 
path="/var/lib/kubelet/pods/c25c2a05-1d9b-4551-9c92-f04da2897895/volumes" Jan 23 14:33:37 crc kubenswrapper[4775]: I0123 14:33:37.754905 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffe262ed-6f79-4dad-91c6-168b164a6459" path="/var/lib/kubelet/pods/ffe262ed-6f79-4dad-91c6-168b164a6459/volumes" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.077451 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-fb53-account-create-update-mth7w"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.091487 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-create-8k7zh"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.098227 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-create-qn6k5"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.107434 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-create-8k7zh"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.115214 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-fb53-account-create-update-mth7w"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.123318 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-create-qn6k5"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.702610 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-5h6rf"] Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703104 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703135 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log" Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703161 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66eb744b-ea4a-4973-8492-2d652c20c447" containerName="mariadb-account-delete" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703173 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="66eb744b-ea4a-4973-8492-2d652c20c447" containerName="mariadb-account-delete" Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703193 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703206 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703230 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f14b26a-2160-432f-a6cf-f3fab1f31afc" containerName="mariadb-account-delete" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703242 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f14b26a-2160-432f-a6cf-f3fab1f31afc" containerName="mariadb-account-delete" Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703268 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703281 4775 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703294 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daaf7413-398a-4a39-a375-c130187f9726" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703306 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="daaf7413-398a-4a39-a375-c130187f9726" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703325 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703337 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:38 crc kubenswrapper[4775]: E0123 14:33:38.703353 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25c2a05-1d9b-4551-9c92-f04da2897895" containerName="mariadb-account-delete" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703364 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25c2a05-1d9b-4551-9c92-f04da2897895" containerName="mariadb-account-delete" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703606 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f14b26a-2160-432f-a6cf-f3fab1f31afc" containerName="mariadb-account-delete" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703623 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="66eb744b-ea4a-4973-8492-2d652c20c447" containerName="mariadb-account-delete" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703639 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-metadata" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703662 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="daaf7413-398a-4a39-a375-c130187f9726" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703685 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e279d5d-df37-483b-9bc7-682b48b2dbc4" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703700 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2ad7dd-d80c-4eb4-8531-c2a8208bb760" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703723 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="08cc29e8-1d83-4f1e-b343-a813a06c7f5a" containerName="nova-kuttl-metadata-log" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.703741 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c25c2a05-1d9b-4551-9c92-f04da2897895" containerName="mariadb-account-delete" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.704535 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.714170 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-5h6rf"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.788906 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nr9cr"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.790004 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.795629 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nr9cr"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.856618 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.856899 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l58vv\" (UniqueName: \"kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.893559 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-p9ljs"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.895111 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.899588 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.900591 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.902109 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.905882 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-p9ljs"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.917076 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"] Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.958486 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.958571 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l58vv\" (UniqueName: \"kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.958616 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.958646 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.959351 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:38 crc kubenswrapper[4775]: I0123 14:33:38.984091 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l58vv\" (UniqueName: \"kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv\") pod \"nova-api-db-create-5h6rf\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.022648 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060064 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060141 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060200 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060291 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn74t\" (UniqueName: \"kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060315 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drkqv\" (UniqueName: \"kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.060339 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.061311 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.080360 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd\") pod \"nova-cell0-db-create-nr9cr\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.103815 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.119428 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"] Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.120284 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.121951 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.140543 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"] Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.161446 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn74t\" (UniqueName: \"kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.161488 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drkqv\" (UniqueName: \"kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.161544 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.161581 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.162169 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.162969 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.191556 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn74t\" (UniqueName: 
\"kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t\") pod \"nova-cell1-db-create-p9ljs\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.209010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drkqv\" (UniqueName: \"kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv\") pod \"nova-api-a3ac-account-create-update-phbcc\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.211103 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.217225 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.262354 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5wb4\" (UniqueName: \"kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.262488 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.340574 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"] Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.344995 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.348623 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.353286 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"] Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.366383 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.366475 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5wb4\" (UniqueName: \"kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.367439 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.384661 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5wb4\" (UniqueName: \"kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4\") pod \"nova-cell0-4dcc-account-create-update-7fftw\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.435112 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.467898 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tbdp\" (UniqueName: \"kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.468223 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.569445 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tbdp\" (UniqueName: \"kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.569531 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.570393 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.584931 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tbdp\" (UniqueName: \"kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp\") pod \"nova-cell1-1814-account-create-update-nnb6t\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.669329 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.685336 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nr9cr"] Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.693063 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-5h6rf"] Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.740279 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2887a864-f392-4887-8b38-bde90ef8f18d" path="/var/lib/kubelet/pods/2887a864-f392-4887-8b38-bde90ef8f18d/volumes" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.740788 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a04db9-60c9-4bce-8100-18a4134d0c86" path="/var/lib/kubelet/pods/c7a04db9-60c9-4bce-8100-18a4134d0c86/volumes" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.741278 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da477c0f-52c9-4e94-894f-d953e46afd95" path="/var/lib/kubelet/pods/da477c0f-52c9-4e94-894f-d953e46afd95/volumes" Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.906150 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"] Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.922847 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-p9ljs"] Jan 23 14:33:39 crc kubenswrapper[4775]: I0123 14:33:39.929698 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"] Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.096017 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"] Jan 23 14:33:40 crc kubenswrapper[4775]: W0123 14:33:40.178693 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb95fa161_1171_4dc2_b0be_3aa279cb717d.slice/crio-3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06 WatchSource:0}: Error finding container 3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06: Status 404 returned error can't find the container with id 3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06 Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.221507 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" event={"ID":"b95fa161-1171-4dc2-b0be-3aa279cb717d","Type":"ContainerStarted","Data":"3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.223773 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" event={"ID":"112204a1-12d6-49b5-b97e-de4daab49dcf","Type":"ContainerStarted","Data":"1e32ef65ab7f89fb7990abe8d40495bef02031f8258cd2be4cdb5fd231e0255d"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.226359 4775 generic.go:334] "Generic (PLEG): container finished" podID="e86f57ad-0eba-4794-8f64-f70609e535e8" containerID="7a9edcf7a6eef68f25783c87ff91eb1a9a70ab35e82018e110b39960153337f3" exitCode=0 Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.226428 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-5h6rf" 
event={"ID":"e86f57ad-0eba-4794-8f64-f70609e535e8","Type":"ContainerDied","Data":"7a9edcf7a6eef68f25783c87ff91eb1a9a70ab35e82018e110b39960153337f3"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.226454 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-5h6rf" event={"ID":"e86f57ad-0eba-4794-8f64-f70609e535e8","Type":"ContainerStarted","Data":"9ecbabb0e447ddd1e5b163ce8b247ab0bfb5995d497181c22b82fa1a883915e6"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.227853 4775 generic.go:334] "Generic (PLEG): container finished" podID="42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" containerID="d5a625216c448145f1513473de681abbe074c66d1f215fbd1239d870733f21c4" exitCode=0 Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.227907 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" event={"ID":"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c","Type":"ContainerDied","Data":"d5a625216c448145f1513473de681abbe074c66d1f215fbd1239d870733f21c4"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.227925 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" event={"ID":"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c","Type":"ContainerStarted","Data":"aaa44cc4c098096ebf4a458b111b86f4cf08bd9fe46316f91d0556773a1e009d"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.229462 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" event={"ID":"c4b0dbf6-948b-45c4-b5a0-6027f816c873","Type":"ContainerStarted","Data":"d0ab6bb65e42df4d16f7f2b6be6fe5d45e9f1defc29fc6849b0c8bb5cacb8e34"} Jan 23 14:33:40 crc kubenswrapper[4775]: I0123 14:33:40.232021 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" event={"ID":"7ff6b200-7364-4e13-956d-628abd48cbaa","Type":"ContainerStarted","Data":"f4cd5135088bb3777d0dc8183607f28098307d3bcb8182377933eaaa9099f247"} Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.244385 4775 generic.go:334] "Generic (PLEG): container finished" podID="b95fa161-1171-4dc2-b0be-3aa279cb717d" containerID="5660aa2517d0892f37febd6e7336a548ede2e720ab7264d812ad264a50eb46b2" exitCode=0 Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.244491 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" event={"ID":"b95fa161-1171-4dc2-b0be-3aa279cb717d","Type":"ContainerDied","Data":"5660aa2517d0892f37febd6e7336a548ede2e720ab7264d812ad264a50eb46b2"} Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.248029 4775 generic.go:334] "Generic (PLEG): container finished" podID="112204a1-12d6-49b5-b97e-de4daab49dcf" containerID="f00011167bc09af603822453b51182838d413ff1ad414892e875b504e0751ab6" exitCode=0 Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.248096 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" event={"ID":"112204a1-12d6-49b5-b97e-de4daab49dcf","Type":"ContainerDied","Data":"f00011167bc09af603822453b51182838d413ff1ad414892e875b504e0751ab6"} Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.250660 4775 generic.go:334] "Generic (PLEG): container finished" podID="c4b0dbf6-948b-45c4-b5a0-6027f816c873" containerID="61ab9533e70d4b69baa5f710542bcb0de5d0a3981f871d6eb9f7dfa31ff05f49" exitCode=0 Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.250750 4775 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" event={"ID":"c4b0dbf6-948b-45c4-b5a0-6027f816c873","Type":"ContainerDied","Data":"61ab9533e70d4b69baa5f710542bcb0de5d0a3981f871d6eb9f7dfa31ff05f49"} Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.253238 4775 generic.go:334] "Generic (PLEG): container finished" podID="7ff6b200-7364-4e13-956d-628abd48cbaa" containerID="de44f8ed18b4260ec3e0e35481cd929500e4cac5322c792037bcf7ae3fda7a94" exitCode=0 Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.253305 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" event={"ID":"7ff6b200-7364-4e13-956d-628abd48cbaa","Type":"ContainerDied","Data":"de44f8ed18b4260ec3e0e35481cd929500e4cac5322c792037bcf7ae3fda7a94"} Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.738928 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.803018 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.906745 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l58vv\" (UniqueName: \"kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv\") pod \"e86f57ad-0eba-4794-8f64-f70609e535e8\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.906912 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts\") pod \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.906998 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts\") pod \"e86f57ad-0eba-4794-8f64-f70609e535e8\" (UID: \"e86f57ad-0eba-4794-8f64-f70609e535e8\") " Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.907018 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd\") pod \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\" (UID: \"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c\") " Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.908411 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e86f57ad-0eba-4794-8f64-f70609e535e8" (UID: "e86f57ad-0eba-4794-8f64-f70609e535e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.909123 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" (UID: "42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.913942 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv" (OuterVolumeSpecName: "kube-api-access-l58vv") pod "e86f57ad-0eba-4794-8f64-f70609e535e8" (UID: "e86f57ad-0eba-4794-8f64-f70609e535e8"). InnerVolumeSpecName "kube-api-access-l58vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:41 crc kubenswrapper[4775]: I0123 14:33:41.931685 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd" (OuterVolumeSpecName: "kube-api-access-m56dd") pod "42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" (UID: "42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c"). InnerVolumeSpecName "kube-api-access-m56dd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.008518 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.008549 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e86f57ad-0eba-4794-8f64-f70609e535e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.008559 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m56dd\" (UniqueName: \"kubernetes.io/projected/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c-kube-api-access-m56dd\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.008571 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l58vv\" (UniqueName: \"kubernetes.io/projected/e86f57ad-0eba-4794-8f64-f70609e535e8-kube-api-access-l58vv\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.277126 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-5h6rf" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.277134 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-5h6rf" event={"ID":"e86f57ad-0eba-4794-8f64-f70609e535e8","Type":"ContainerDied","Data":"9ecbabb0e447ddd1e5b163ce8b247ab0bfb5995d497181c22b82fa1a883915e6"} Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.277399 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ecbabb0e447ddd1e5b163ce8b247ab0bfb5995d497181c22b82fa1a883915e6" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.279982 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" event={"ID":"42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c","Type":"ContainerDied","Data":"aaa44cc4c098096ebf4a458b111b86f4cf08bd9fe46316f91d0556773a1e009d"} Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.280040 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-nr9cr" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.280045 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaa44cc4c098096ebf4a458b111b86f4cf08bd9fe46316f91d0556773a1e009d" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.531765 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.619181 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts\") pod \"112204a1-12d6-49b5-b97e-de4daab49dcf\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.619369 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn74t\" (UniqueName: \"kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t\") pod \"112204a1-12d6-49b5-b97e-de4daab49dcf\" (UID: \"112204a1-12d6-49b5-b97e-de4daab49dcf\") " Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.620759 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "112204a1-12d6-49b5-b97e-de4daab49dcf" (UID: "112204a1-12d6-49b5-b97e-de4daab49dcf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.627512 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t" (OuterVolumeSpecName: "kube-api-access-vn74t") pod "112204a1-12d6-49b5-b97e-de4daab49dcf" (UID: "112204a1-12d6-49b5-b97e-de4daab49dcf"). InnerVolumeSpecName "kube-api-access-vn74t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.721758 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn74t\" (UniqueName: \"kubernetes.io/projected/112204a1-12d6-49b5-b97e-de4daab49dcf-kube-api-access-vn74t\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.721794 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/112204a1-12d6-49b5-b97e-de4daab49dcf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.880718 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.887153 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:42 crc kubenswrapper[4775]: I0123 14:33:42.896718 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028414 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7tbdp\" (UniqueName: \"kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp\") pod \"b95fa161-1171-4dc2-b0be-3aa279cb717d\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028459 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts\") pod \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028487 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts\") pod \"7ff6b200-7364-4e13-956d-628abd48cbaa\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028528 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5wb4\" (UniqueName: \"kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4\") pod \"7ff6b200-7364-4e13-956d-628abd48cbaa\" (UID: \"7ff6b200-7364-4e13-956d-628abd48cbaa\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028569 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts\") pod \"b95fa161-1171-4dc2-b0be-3aa279cb717d\" (UID: \"b95fa161-1171-4dc2-b0be-3aa279cb717d\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.028698 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drkqv\" (UniqueName: \"kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv\") pod \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\" (UID: \"c4b0dbf6-948b-45c4-b5a0-6027f816c873\") " Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.029517 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4b0dbf6-948b-45c4-b5a0-6027f816c873" (UID: "c4b0dbf6-948b-45c4-b5a0-6027f816c873"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.030125 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b95fa161-1171-4dc2-b0be-3aa279cb717d" (UID: "b95fa161-1171-4dc2-b0be-3aa279cb717d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.030256 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ff6b200-7364-4e13-956d-628abd48cbaa" (UID: "7ff6b200-7364-4e13-956d-628abd48cbaa"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.034133 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv" (OuterVolumeSpecName: "kube-api-access-drkqv") pod "c4b0dbf6-948b-45c4-b5a0-6027f816c873" (UID: "c4b0dbf6-948b-45c4-b5a0-6027f816c873"). InnerVolumeSpecName "kube-api-access-drkqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.034870 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp" (OuterVolumeSpecName: "kube-api-access-7tbdp") pod "b95fa161-1171-4dc2-b0be-3aa279cb717d" (UID: "b95fa161-1171-4dc2-b0be-3aa279cb717d"). InnerVolumeSpecName "kube-api-access-7tbdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.035948 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4" (OuterVolumeSpecName: "kube-api-access-g5wb4") pod "7ff6b200-7364-4e13-956d-628abd48cbaa" (UID: "7ff6b200-7364-4e13-956d-628abd48cbaa"). InnerVolumeSpecName "kube-api-access-g5wb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130752 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7tbdp\" (UniqueName: \"kubernetes.io/projected/b95fa161-1171-4dc2-b0be-3aa279cb717d-kube-api-access-7tbdp\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130796 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4b0dbf6-948b-45c4-b5a0-6027f816c873-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130827 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ff6b200-7364-4e13-956d-628abd48cbaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130836 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5wb4\" (UniqueName: \"kubernetes.io/projected/7ff6b200-7364-4e13-956d-628abd48cbaa-kube-api-access-g5wb4\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130845 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b95fa161-1171-4dc2-b0be-3aa279cb717d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.130855 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drkqv\" (UniqueName: \"kubernetes.io/projected/c4b0dbf6-948b-45c4-b5a0-6027f816c873-kube-api-access-drkqv\") on node \"crc\" DevicePath \"\"" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.295502 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.295516 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc" event={"ID":"c4b0dbf6-948b-45c4-b5a0-6027f816c873","Type":"ContainerDied","Data":"d0ab6bb65e42df4d16f7f2b6be6fe5d45e9f1defc29fc6849b0c8bb5cacb8e34"} Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.295582 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0ab6bb65e42df4d16f7f2b6be6fe5d45e9f1defc29fc6849b0c8bb5cacb8e34" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.299605 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.299613 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw" event={"ID":"7ff6b200-7364-4e13-956d-628abd48cbaa","Type":"ContainerDied","Data":"f4cd5135088bb3777d0dc8183607f28098307d3bcb8182377933eaaa9099f247"} Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.299675 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4cd5135088bb3777d0dc8183607f28098307d3bcb8182377933eaaa9099f247" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.302521 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" event={"ID":"b95fa161-1171-4dc2-b0be-3aa279cb717d","Type":"ContainerDied","Data":"3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06"} Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.302554 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.302580 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c229e0a1a59418ea93ecd0ed3eea10a36b0db65c956d12878a6279bf8ef6a06" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.305160 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" event={"ID":"112204a1-12d6-49b5-b97e-de4daab49dcf","Type":"ContainerDied","Data":"1e32ef65ab7f89fb7990abe8d40495bef02031f8258cd2be4cdb5fd231e0255d"} Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.305217 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e32ef65ab7f89fb7990abe8d40495bef02031f8258cd2be4cdb5fd231e0255d" Jan 23 14:33:43 crc kubenswrapper[4775]: I0123 14:33:43.305357 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-p9ljs" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.641556 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642178 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95fa161-1171-4dc2-b0be-3aa279cb717d" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642191 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95fa161-1171-4dc2-b0be-3aa279cb717d" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642206 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4b0dbf6-948b-45c4-b5a0-6027f816c873" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642212 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4b0dbf6-948b-45c4-b5a0-6027f816c873" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642223 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e86f57ad-0eba-4794-8f64-f70609e535e8" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642229 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e86f57ad-0eba-4794-8f64-f70609e535e8" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642242 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112204a1-12d6-49b5-b97e-de4daab49dcf" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642248 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="112204a1-12d6-49b5-b97e-de4daab49dcf" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642258 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642265 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.642274 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ff6b200-7364-4e13-956d-628abd48cbaa" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642280 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ff6b200-7364-4e13-956d-628abd48cbaa" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642414 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e86f57ad-0eba-4794-8f64-f70609e535e8" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642428 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="112204a1-12d6-49b5-b97e-de4daab49dcf" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642436 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ff6b200-7364-4e13-956d-628abd48cbaa" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642445 4775 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" containerName="mariadb-database-create" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642461 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="b95fa161-1171-4dc2-b0be-3aa279cb717d" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642467 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4b0dbf6-948b-45c4-b5a0-6027f816c873" containerName="mariadb-account-create-update" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.642942 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.644631 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-289sx" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.645033 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.649101 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.714696 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:33:44 crc kubenswrapper[4775]: E0123 14:33:44.714938 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.734864 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.735994 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.745295 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.748087 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.758075 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzqmq\" (UniqueName: \"kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.758228 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.859440 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.859482 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzqmq\" (UniqueName: \"kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.859844 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64gdf\" (UniqueName: \"kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.859946 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.867591 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.889195 4775 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vzqmq\" (UniqueName: \"kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.960944 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64gdf\" (UniqueName: \"kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.961038 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.965646 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:44 crc kubenswrapper[4775]: I0123 14:33:44.992725 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.008150 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64gdf\" (UniqueName: \"kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.054723 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.472358 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.478549 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.588222 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:33:45 crc kubenswrapper[4775]: W0123 14:33:45.591901 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51e63565_a2ef_4d12_af2f_f3dc6c2942d9.slice/crio-176dcff14ce2e75b9b75fea74f3c3fe40830311cc826cb992f71f0968d9bd274 WatchSource:0}: Error finding container 176dcff14ce2e75b9b75fea74f3c3fe40830311cc826cb992f71f0968d9bd274: Status 404 returned error can't find the container with id 176dcff14ce2e75b9b75fea74f3c3fe40830311cc826cb992f71f0968d9bd274 Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.959382 4775 scope.go:117] "RemoveContainer" containerID="4198c894ee5e56e286b0cbfe28fec2b93833db9cb46297fad57dce94d57cabf9" Jan 23 14:33:45 crc kubenswrapper[4775]: I0123 14:33:45.983865 4775 scope.go:117] "RemoveContainer" containerID="a2f2a732f030cd4d4d5df85398503f60726ce73a20188125433f4f1e1c54a86f" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.013392 4775 scope.go:117] "RemoveContainer" containerID="45eb281a90784378326e137fb73e4ed8e5e8582744a86eeaf4ee707b7c73c128" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.048646 4775 scope.go:117] "RemoveContainer" containerID="4416e85269b1c4f191cdc1bfa52a3e5ae7f058b4bf7a7282d8bc2d3b5f93f115" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.106983 4775 scope.go:117] "RemoveContainer" containerID="750eb99745aee2f0e8dca16ba12e68de151eeb1758e4a96888cb2f880483b793" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.123022 4775 scope.go:117] "RemoveContainer" containerID="711f68f5e6e9927f1844635ae91ffaae80eaf390a5a10c418f40e975d1662c3b" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.160211 4775 scope.go:117] "RemoveContainer" containerID="204b70c75b108eb876b17c40860b15870affa382adc84f2a27cb048cf9061fa7" Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.352920 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"a3bbc7d7-fc9d-490e-9610-55805e5e876c","Type":"ContainerStarted","Data":"3b893ae1dbc88ba1326e6a0a0bd54925381cdc400ec55f87f58040e0b56c3ac3"} Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.359019 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"51e63565-a2ef-4d12-af2f-f3dc6c2942d9","Type":"ContainerStarted","Data":"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35"} Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.359073 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"51e63565-a2ef-4d12-af2f-f3dc6c2942d9","Type":"ContainerStarted","Data":"176dcff14ce2e75b9b75fea74f3c3fe40830311cc826cb992f71f0968d9bd274"} Jan 23 14:33:46 crc kubenswrapper[4775]: I0123 14:33:46.383302 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.38318019 podStartE2EDuration="2.38318019s" podCreationTimestamp="2026-01-23 14:33:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:33:46.374074573 +0000 UTC m=+1773.368903313" watchObservedRunningTime="2026-01-23 14:33:46.38318019 +0000 UTC m=+1773.378008930" Jan 23 14:33:50 crc kubenswrapper[4775]: I0123 14:33:50.055268 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:55 crc kubenswrapper[4775]: I0123 14:33:55.055621 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:55 crc kubenswrapper[4775]: I0123 14:33:55.075322 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:55 crc kubenswrapper[4775]: I0123 14:33:55.449937 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.040891 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/root-account-create-update-6bcp5"] Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.055419 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/root-account-create-update-6bcp5"] Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.456394 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"a3bbc7d7-fc9d-490e-9610-55805e5e876c","Type":"ContainerStarted","Data":"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874"} Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.456683 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.479508 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.696730101 podStartE2EDuration="13.479491525s" podCreationTimestamp="2026-01-23 14:33:44 +0000 UTC" firstStartedPulling="2026-01-23 14:33:45.478173156 +0000 UTC m=+1772.473001906" lastFinishedPulling="2026-01-23 14:33:56.26093455 +0000 UTC m=+1783.255763330" observedRunningTime="2026-01-23 14:33:57.474256357 +0000 UTC m=+1784.469085187" watchObservedRunningTime="2026-01-23 14:33:57.479491525 +0000 UTC m=+1784.474320275" Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.508673 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:33:57 crc kubenswrapper[4775]: I0123 14:33:57.749177 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc48032-9af5-4d79-bc89-f7d576911b23" path="/var/lib/kubelet/pods/ccc48032-9af5-4d79-bc89-f7d576911b23/volumes" Jan 23 14:33:58 crc kubenswrapper[4775]: I0123 14:33:58.714400 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:33:58 crc kubenswrapper[4775]: E0123 14:33:58.714979 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.636413 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.644944 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sq2k5"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.669495 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.676726 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-lcg7l"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.723125 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b47b9373-0dd5-4635-a8f9-06aa0fc60174" path="/var/lib/kubelet/pods/b47b9373-0dd5-4635-a8f9-06aa0fc60174/volumes" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.723763 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4701d5c-309d-4969-852b-83626330e0df" path="/var/lib/kubelet/pods/c4701d5c-309d-4969-852b-83626330e0df/volumes" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.812160 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.814116 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.816188 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.817516 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.829412 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.878195 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.879418 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.882065 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.885123 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.887660 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855"] Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918329 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918401 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918482 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918525 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918561 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg8rb\" (UniqueName: \"kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:01 crc kubenswrapper[4775]: I0123 14:34:01.918606 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsd26\" (UniqueName: \"kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019662 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts\") pod 
\"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019705 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019750 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019774 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019794 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg8rb\" (UniqueName: \"kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.019835 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsd26\" (UniqueName: \"kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.028154 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.029107 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.031639 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.032613 4775 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.039401 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsd26\" (UniqueName: \"kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26\") pod \"nova-kuttl-cell0-conductor-db-sync-hr855\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.041711 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg8rb\" (UniqueName: \"kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb\") pod \"nova-kuttl-cell1-conductor-db-sync-svgzc\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.129620 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.192915 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.632362 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc"] Jan 23 14:34:02 crc kubenswrapper[4775]: W0123 14:34:02.642281 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod004165d0_70f3_4e04_8f77_1342a98147bb.slice/crio-e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4 WatchSource:0}: Error finding container e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4: Status 404 returned error can't find the container with id e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4 Jan 23 14:34:02 crc kubenswrapper[4775]: I0123 14:34:02.693576 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855"] Jan 23 14:34:03 crc kubenswrapper[4775]: I0123 14:34:03.517469 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" event={"ID":"f25e3b63-3402-4d38-8f18-e4f015797854","Type":"ContainerStarted","Data":"6d9268bfe9748ec6624655bc60aabe83c7ae7e713292756baef52641a7e4c393"} Jan 23 14:34:03 crc kubenswrapper[4775]: I0123 14:34:03.518077 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" event={"ID":"f25e3b63-3402-4d38-8f18-e4f015797854","Type":"ContainerStarted","Data":"c1c070e8bc953626ab6530de0fd2da83e1ce87a1fc04dcf6d9efec5bbccb4de5"} Jan 23 14:34:03 crc kubenswrapper[4775]: I0123 14:34:03.520525 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" event={"ID":"004165d0-70f3-4e04-8f77-1342a98147bb","Type":"ContainerStarted","Data":"2a4347263630b9bca7d3c8fbb1ac8953b6f41d8acd21d8aebe8a8fad3474db05"} Jan 23 14:34:03 crc 
kubenswrapper[4775]: I0123 14:34:03.520564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" event={"ID":"004165d0-70f3-4e04-8f77-1342a98147bb","Type":"ContainerStarted","Data":"e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4"} Jan 23 14:34:03 crc kubenswrapper[4775]: I0123 14:34:03.543347 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" podStartSLOduration=2.543331343 podStartE2EDuration="2.543331343s" podCreationTimestamp="2026-01-23 14:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:03.533066794 +0000 UTC m=+1790.527895574" watchObservedRunningTime="2026-01-23 14:34:03.543331343 +0000 UTC m=+1790.538160073" Jan 23 14:34:05 crc kubenswrapper[4775]: I0123 14:34:05.537739 4775 generic.go:334] "Generic (PLEG): container finished" podID="004165d0-70f3-4e04-8f77-1342a98147bb" containerID="2a4347263630b9bca7d3c8fbb1ac8953b6f41d8acd21d8aebe8a8fad3474db05" exitCode=0 Jan 23 14:34:05 crc kubenswrapper[4775]: I0123 14:34:05.538137 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" event={"ID":"004165d0-70f3-4e04-8f77-1342a98147bb","Type":"ContainerDied","Data":"2a4347263630b9bca7d3c8fbb1ac8953b6f41d8acd21d8aebe8a8fad3474db05"} Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.032049 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2qsr9"] Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.049430 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-db-sync-2qsr9"] Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.878937 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.919479 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg8rb\" (UniqueName: \"kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb\") pod \"004165d0-70f3-4e04-8f77-1342a98147bb\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.919572 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data\") pod \"004165d0-70f3-4e04-8f77-1342a98147bb\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.919678 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts\") pod \"004165d0-70f3-4e04-8f77-1342a98147bb\" (UID: \"004165d0-70f3-4e04-8f77-1342a98147bb\") " Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.937115 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb" (OuterVolumeSpecName: "kube-api-access-wg8rb") pod "004165d0-70f3-4e04-8f77-1342a98147bb" (UID: "004165d0-70f3-4e04-8f77-1342a98147bb"). InnerVolumeSpecName "kube-api-access-wg8rb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.942730 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts" (OuterVolumeSpecName: "scripts") pod "004165d0-70f3-4e04-8f77-1342a98147bb" (UID: "004165d0-70f3-4e04-8f77-1342a98147bb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:06 crc kubenswrapper[4775]: I0123 14:34:06.973972 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data" (OuterVolumeSpecName: "config-data") pod "004165d0-70f3-4e04-8f77-1342a98147bb" (UID: "004165d0-70f3-4e04-8f77-1342a98147bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.021571 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg8rb\" (UniqueName: \"kubernetes.io/projected/004165d0-70f3-4e04-8f77-1342a98147bb-kube-api-access-wg8rb\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.021606 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.021620 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/004165d0-70f3-4e04-8f77-1342a98147bb-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.559943 4775 generic.go:334] "Generic (PLEG): container finished" podID="f25e3b63-3402-4d38-8f18-e4f015797854" containerID="6d9268bfe9748ec6624655bc60aabe83c7ae7e713292756baef52641a7e4c393" exitCode=0 Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.560041 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" event={"ID":"f25e3b63-3402-4d38-8f18-e4f015797854","Type":"ContainerDied","Data":"6d9268bfe9748ec6624655bc60aabe83c7ae7e713292756baef52641a7e4c393"} Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.562927 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" event={"ID":"004165d0-70f3-4e04-8f77-1342a98147bb","Type":"ContainerDied","Data":"e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4"} Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.562965 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9b71b7be1b179203948c5a4118fb37e9e60019ad02696027e31503455e674d4" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.563084 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.730248 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c017749-eae9-4edd-91eb-21b25275a986" path="/var/lib/kubelet/pods/2c017749-eae9-4edd-91eb-21b25275a986/volumes" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.997137 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:34:07 crc kubenswrapper[4775]: E0123 14:34:07.997527 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="004165d0-70f3-4e04-8f77-1342a98147bb" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.997544 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="004165d0-70f3-4e04-8f77-1342a98147bb" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.997755 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="004165d0-70f3-4e04-8f77-1342a98147bb" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:34:07 crc kubenswrapper[4775]: I0123 14:34:07.998565 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.001631 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.032111 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.146895 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hj9j\" (UniqueName: \"kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.147038 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.248967 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hj9j\" (UniqueName: \"kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.249146 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.262222 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.273691 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hj9j\" (UniqueName: \"kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.322493 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.560924 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.577167 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bcae715-33d1-4c44-9a33-f617c489dd8c","Type":"ContainerStarted","Data":"993d5972eb5c6f4c100b944f0126ed4f2e54f4d9412dabbd89c853572013d71a"} Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.828553 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.870298 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsd26\" (UniqueName: \"kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26\") pod \"f25e3b63-3402-4d38-8f18-e4f015797854\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.870376 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts\") pod \"f25e3b63-3402-4d38-8f18-e4f015797854\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.870405 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data\") pod \"f25e3b63-3402-4d38-8f18-e4f015797854\" (UID: \"f25e3b63-3402-4d38-8f18-e4f015797854\") " Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.888934 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts" (OuterVolumeSpecName: "scripts") pod "f25e3b63-3402-4d38-8f18-e4f015797854" (UID: "f25e3b63-3402-4d38-8f18-e4f015797854"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.893140 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26" (OuterVolumeSpecName: "kube-api-access-fsd26") pod "f25e3b63-3402-4d38-8f18-e4f015797854" (UID: "f25e3b63-3402-4d38-8f18-e4f015797854"). InnerVolumeSpecName "kube-api-access-fsd26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.904098 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data" (OuterVolumeSpecName: "config-data") pod "f25e3b63-3402-4d38-8f18-e4f015797854" (UID: "f25e3b63-3402-4d38-8f18-e4f015797854"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.973024 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.973083 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f25e3b63-3402-4d38-8f18-e4f015797854-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:08 crc kubenswrapper[4775]: I0123 14:34:08.973109 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsd26\" (UniqueName: \"kubernetes.io/projected/f25e3b63-3402-4d38-8f18-e4f015797854-kube-api-access-fsd26\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.599980 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bcae715-33d1-4c44-9a33-f617c489dd8c","Type":"ContainerStarted","Data":"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30"} Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.600448 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.602607 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" event={"ID":"f25e3b63-3402-4d38-8f18-e4f015797854","Type":"ContainerDied","Data":"c1c070e8bc953626ab6530de0fd2da83e1ce87a1fc04dcf6d9efec5bbccb4de5"} Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.602639 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1c070e8bc953626ab6530de0fd2da83e1ce87a1fc04dcf6d9efec5bbccb4de5" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.602732 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.639042 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.6390054 podStartE2EDuration="2.6390054s" podCreationTimestamp="2026-01-23 14:34:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:09.622216456 +0000 UTC m=+1796.617045206" watchObservedRunningTime="2026-01-23 14:34:09.6390054 +0000 UTC m=+1796.633834180" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.675608 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:34:09 crc kubenswrapper[4775]: E0123 14:34:09.676068 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25e3b63-3402-4d38-8f18-e4f015797854" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.676091 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25e3b63-3402-4d38-8f18-e4f015797854" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.676280 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25e3b63-3402-4d38-8f18-e4f015797854" containerName="nova-kuttl-cell0-conductor-db-sync" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.676970 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.679271 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.683219 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqtbj\" (UniqueName: \"kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.683321 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.684021 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.792517 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.792653 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqtbj\" (UniqueName: \"kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj\") pod 
\"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.798175 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:09 crc kubenswrapper[4775]: I0123 14:34:09.823251 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqtbj\" (UniqueName: \"kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:10 crc kubenswrapper[4775]: I0123 14:34:10.004215 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:10 crc kubenswrapper[4775]: I0123 14:34:10.267064 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:34:10 crc kubenswrapper[4775]: W0123 14:34:10.275944 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7b007ba6_d3ea_4b9d_b325_3ffabb38bdfa.slice/crio-862d714ec5d72fa2cecc76c787b92a298898df37e1f6457c744d6aed52ae7549 WatchSource:0}: Error finding container 862d714ec5d72fa2cecc76c787b92a298898df37e1f6457c744d6aed52ae7549: Status 404 returned error can't find the container with id 862d714ec5d72fa2cecc76c787b92a298898df37e1f6457c744d6aed52ae7549 Jan 23 14:34:10 crc kubenswrapper[4775]: I0123 14:34:10.611957 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa","Type":"ContainerStarted","Data":"862d714ec5d72fa2cecc76c787b92a298898df37e1f6457c744d6aed52ae7549"} Jan 23 14:34:11 crc kubenswrapper[4775]: I0123 14:34:11.622057 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa","Type":"ContainerStarted","Data":"8ebbe7df337eed7eec1cd0d49f40ddb05c909061e66825a9f581a0ea754192e7"} Jan 23 14:34:11 crc kubenswrapper[4775]: I0123 14:34:11.623539 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:12 crc kubenswrapper[4775]: I0123 14:34:12.713905 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:34:12 crc kubenswrapper[4775]: E0123 14:34:12.714433 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.359254 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:34:13 crc 
kubenswrapper[4775]: I0123 14:34:13.378498 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=4.37848056 podStartE2EDuration="4.37848056s" podCreationTimestamp="2026-01-23 14:34:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:11.657408272 +0000 UTC m=+1798.652237012" watchObservedRunningTime="2026-01-23 14:34:13.37848056 +0000 UTC m=+1800.373309290" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.893548 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2"] Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.894667 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.897358 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.899542 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.911530 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2"] Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.932677 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df"] Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.933673 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:13 crc kubenswrapper[4775]: I0123 14:34:13.948614 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df"] Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007655 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007739 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007775 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6jr\" (UniqueName: \"kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007942 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdkdk\" (UniqueName: \"kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.007976 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.036572 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/placement-db-sync-sgnh6"] Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.050155 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/placement-db-sync-sgnh6"] Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.108912 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdkdk\" (UniqueName: \"kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " 
pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.109188 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.109319 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.109413 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.109497 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6jr\" (UniqueName: \"kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.109609 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.115718 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.117505 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.117682 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.118120 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: 
\"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.132411 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6jr\" (UniqueName: \"kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr\") pod \"nova-kuttl-cell1-host-discover-qb9df\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.132789 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdkdk\" (UniqueName: \"kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk\") pod \"nova-kuttl-cell1-cell-mapping-kmnk2\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.210910 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.247433 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:14 crc kubenswrapper[4775]: W0123 14:34:14.674600 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb75fd7c_ba91_4090_ac20_0009c06598f3.slice/crio-0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9 WatchSource:0}: Error finding container 0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9: Status 404 returned error can't find the container with id 0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9 Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.680236 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2"] Jan 23 14:34:14 crc kubenswrapper[4775]: I0123 14:34:14.730777 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df"] Jan 23 14:34:14 crc kubenswrapper[4775]: W0123 14:34:14.733687 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec6263e3_855a_48e5_ae77_25462d7e5a13.slice/crio-77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3 WatchSource:0}: Error finding container 77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3: Status 404 returned error can't find the container with id 77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3 Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.033775 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.493856 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.495040 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.498379 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.498545 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.506736 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.650311 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snmpp\" (UniqueName: \"kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.650482 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.650571 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.662279 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerStarted","Data":"ba9aa3a2fb38d7f28f8fd65dca65cb5079144b881eab4e42302934720de2c14c"} Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.662326 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerStarted","Data":"77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3"} Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.663523 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" event={"ID":"db75fd7c-ba91-4090-ac20-0009c06598f3","Type":"ContainerStarted","Data":"ed23d1d8c2e578153c70d817dfeffe62e4af30e952a97680b7c773eb23fb2ca1"} Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.663647 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" event={"ID":"db75fd7c-ba91-4090-ac20-0009c06598f3","Type":"ContainerStarted","Data":"0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9"} Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.698674 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" podStartSLOduration=2.698658871 podStartE2EDuration="2.698658871s" podCreationTimestamp="2026-01-23 14:34:13 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:15.697896569 +0000 UTC m=+1802.692725309" watchObservedRunningTime="2026-01-23 14:34:15.698658871 +0000 UTC m=+1802.693487611" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.714416 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" podStartSLOduration=2.714401455 podStartE2EDuration="2.714401455s" podCreationTimestamp="2026-01-23 14:34:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:15.711003479 +0000 UTC m=+1802.705832209" watchObservedRunningTime="2026-01-23 14:34:15.714401455 +0000 UTC m=+1802.709230195" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.723982 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6" path="/var/lib/kubelet/pods/c22eb7b9-6c07-4edc-a7f7-9e9c4f5acfe6/volumes" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.753020 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snmpp\" (UniqueName: \"kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.753101 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.753290 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.759585 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.765445 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.770766 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snmpp\" (UniqueName: \"kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp\") pod \"nova-kuttl-cell0-cell-mapping-bvq25\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 
14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.776656 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.785457 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.787270 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.791236 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.797828 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.798796 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.810489 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.832288 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.855268 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.855310 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.855354 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqghj\" (UniqueName: \"kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.855387 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.855419 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzdc\" (UniqueName: \"kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.860711 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.889252 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.890365 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.896345 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.920655 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.956653 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.957051 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnzdc\" (UniqueName: \"kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.957190 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.957214 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.957248 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqghj\" (UniqueName: \"kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.957925 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.968406 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.975585 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqghj\" 
(UniqueName: \"kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.979441 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data\") pod \"nova-kuttl-api-0\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:15 crc kubenswrapper[4775]: I0123 14:34:15.998135 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnzdc\" (UniqueName: \"kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc\") pod \"nova-kuttl-scheduler-0\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.059080 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.059156 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vfls\" (UniqueName: \"kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.059226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.149441 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.160300 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.160381 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vfls\" (UniqueName: \"kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.160508 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.161195 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.165380 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.166960 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.179591 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vfls\" (UniqueName: \"kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls\") pod \"nova-kuttl-metadata-0\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.244651 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.356396 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25"] Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.671680 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" event={"ID":"71a6469b-2bd1-4004-9a3d-c9d87161efab","Type":"ContainerStarted","Data":"8338a669e0d43937d5f843231e5fbbed5ec502884f9ba96c38e08d3114af925f"} Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.672141 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" event={"ID":"71a6469b-2bd1-4004-9a3d-c9d87161efab","Type":"ContainerStarted","Data":"156abf729125e825e482677cd02117e6955b3cd43618d229ad3b86794d80e8f0"} Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.686168 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:16 crc kubenswrapper[4775]: I0123 14:34:16.688740 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" podStartSLOduration=1.6887235760000001 podStartE2EDuration="1.688723576s" podCreationTimestamp="2026-01-23 14:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:16.688673094 +0000 UTC m=+1803.683501834" watchObservedRunningTime="2026-01-23 14:34:16.688723576 +0000 UTC m=+1803.683552316" Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.390997 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.404162 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.695172 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerStarted","Data":"5c3d0956333cbd83fa5ca67cf1ad79878bf44e3d1a8725af4b0545d4d6530237"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.695474 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerStarted","Data":"c6887ae1f93aec0d07d00ff51cf7a1b2b8059f436065d03b9bea93ac8509208b"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.697034 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cdb17e1-4872-47c6-a39d-eac9257959bf","Type":"ContainerStarted","Data":"6318732f39157d8f833daeabb446b0f5b265420e6e5607406e6542efdecb189c"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.697112 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cdb17e1-4872-47c6-a39d-eac9257959bf","Type":"ContainerStarted","Data":"72284d5a64aac0d3ed9862d805136b464b664d4d4255f42e29391ce52dc5c6db"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.701381 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerStarted","Data":"0ad968b4ba3d3de850ad82e61b509fe22db14cd818dbc2c1bed979d9cea5791e"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.701423 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerStarted","Data":"a78d09621585651729bf825f6c0c7c87c9c74cae0df8c0c5b2cea147df0c160b"} Jan 23 14:34:17 crc kubenswrapper[4775]: I0123 14:34:17.719575 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.719560983 podStartE2EDuration="2.719560983s" podCreationTimestamp="2026-01-23 14:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:17.717730991 +0000 UTC m=+1804.712559731" watchObservedRunningTime="2026-01-23 14:34:17.719560983 +0000 UTC m=+1804.714389723" Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.716558 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerStarted","Data":"4d0b4a252f93936495c3a9a585fd700fd1b77459332bc2ab2912bf962d0c63a8"} Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.718753 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerStarted","Data":"eaec2cfe6653a3e7b4aa21077d223c9ff8028cc0a745039d700d8549a34e22e4"} Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.721928 4775 generic.go:334] "Generic (PLEG): container finished" podID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerID="ba9aa3a2fb38d7f28f8fd65dca65cb5079144b881eab4e42302934720de2c14c" exitCode=255 Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.721958 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerDied","Data":"ba9aa3a2fb38d7f28f8fd65dca65cb5079144b881eab4e42302934720de2c14c"} Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.722433 4775 scope.go:117] "RemoveContainer" containerID="ba9aa3a2fb38d7f28f8fd65dca65cb5079144b881eab4e42302934720de2c14c" Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.752443 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=3.752415596 podStartE2EDuration="3.752415596s" podCreationTimestamp="2026-01-23 14:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:18.735098197 +0000 UTC m=+1805.729926937" watchObservedRunningTime="2026-01-23 14:34:18.752415596 +0000 UTC m=+1805.747244366" Jan 23 14:34:18 crc kubenswrapper[4775]: I0123 14:34:18.769722 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=3.769699154 podStartE2EDuration="3.769699154s" podCreationTimestamp="2026-01-23 14:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:18.755945576 +0000 UTC m=+1805.750774366" watchObservedRunningTime="2026-01-23 14:34:18.769699154 +0000 
UTC m=+1805.764527914" Jan 23 14:34:19 crc kubenswrapper[4775]: I0123 14:34:19.033375 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6qmk5"] Jan 23 14:34:19 crc kubenswrapper[4775]: I0123 14:34:19.043995 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/keystone-bootstrap-6qmk5"] Jan 23 14:34:19 crc kubenswrapper[4775]: I0123 14:34:19.723883 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5498924-f821-48fa-88a0-6d8c0c7c01de" path="/var/lib/kubelet/pods/b5498924-f821-48fa-88a0-6d8c0c7c01de/volumes" Jan 23 14:34:19 crc kubenswrapper[4775]: I0123 14:34:19.736640 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerStarted","Data":"3951b61bf0f5fd68e8a231037d3c4c31e8105e9a338b029e1bef1e8babd9023f"} Jan 23 14:34:20 crc kubenswrapper[4775]: I0123 14:34:20.748968 4775 generic.go:334] "Generic (PLEG): container finished" podID="db75fd7c-ba91-4090-ac20-0009c06598f3" containerID="ed23d1d8c2e578153c70d817dfeffe62e4af30e952a97680b7c773eb23fb2ca1" exitCode=0 Jan 23 14:34:20 crc kubenswrapper[4775]: I0123 14:34:20.749385 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" event={"ID":"db75fd7c-ba91-4090-ac20-0009c06598f3","Type":"ContainerDied","Data":"ed23d1d8c2e578153c70d817dfeffe62e4af30e952a97680b7c773eb23fb2ca1"} Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.167718 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.245276 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.245352 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.764138 4775 generic.go:334] "Generic (PLEG): container finished" podID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerID="3951b61bf0f5fd68e8a231037d3c4c31e8105e9a338b029e1bef1e8babd9023f" exitCode=0 Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.764261 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerDied","Data":"3951b61bf0f5fd68e8a231037d3c4c31e8105e9a338b029e1bef1e8babd9023f"} Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.764345 4775 scope.go:117] "RemoveContainer" containerID="ba9aa3a2fb38d7f28f8fd65dca65cb5079144b881eab4e42302934720de2c14c" Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.774581 4775 generic.go:334] "Generic (PLEG): container finished" podID="71a6469b-2bd1-4004-9a3d-c9d87161efab" containerID="8338a669e0d43937d5f843231e5fbbed5ec502884f9ba96c38e08d3114af925f" exitCode=0 Jan 23 14:34:21 crc kubenswrapper[4775]: I0123 14:34:21.774641 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" event={"ID":"71a6469b-2bd1-4004-9a3d-c9d87161efab","Type":"ContainerDied","Data":"8338a669e0d43937d5f843231e5fbbed5ec502884f9ba96c38e08d3114af925f"} Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.184642 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.336318 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data\") pod \"db75fd7c-ba91-4090-ac20-0009c06598f3\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.336410 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts\") pod \"db75fd7c-ba91-4090-ac20-0009c06598f3\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.336439 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdkdk\" (UniqueName: \"kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk\") pod \"db75fd7c-ba91-4090-ac20-0009c06598f3\" (UID: \"db75fd7c-ba91-4090-ac20-0009c06598f3\") " Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.343400 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts" (OuterVolumeSpecName: "scripts") pod "db75fd7c-ba91-4090-ac20-0009c06598f3" (UID: "db75fd7c-ba91-4090-ac20-0009c06598f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.353993 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk" (OuterVolumeSpecName: "kube-api-access-wdkdk") pod "db75fd7c-ba91-4090-ac20-0009c06598f3" (UID: "db75fd7c-ba91-4090-ac20-0009c06598f3"). InnerVolumeSpecName "kube-api-access-wdkdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.381769 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data" (OuterVolumeSpecName: "config-data") pod "db75fd7c-ba91-4090-ac20-0009c06598f3" (UID: "db75fd7c-ba91-4090-ac20-0009c06598f3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.438117 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.438151 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db75fd7c-ba91-4090-ac20-0009c06598f3-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.438160 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdkdk\" (UniqueName: \"kubernetes.io/projected/db75fd7c-ba91-4090-ac20-0009c06598f3-kube-api-access-wdkdk\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.787161 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" event={"ID":"db75fd7c-ba91-4090-ac20-0009c06598f3","Type":"ContainerDied","Data":"0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9"} Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.787205 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.787227 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e103c6efda37b7d35d8959b0e0cb3caa3e02a7d8cc645a289fbdc77aaad85e9" Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.990185 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.990726 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-log" containerID="cri-o://5c3d0956333cbd83fa5ca67cf1ad79878bf44e3d1a8725af4b0545d4d6530237" gracePeriod=30 Jan 23 14:34:22 crc kubenswrapper[4775]: I0123 14:34:22.990878 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-api" containerID="cri-o://eaec2cfe6653a3e7b4aa21077d223c9ff8028cc0a745039d700d8549a34e22e4" gracePeriod=30 Jan 23 14:34:23 crc kubenswrapper[4775]: I0123 14:34:23.020953 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:34:23 crc kubenswrapper[4775]: I0123 14:34:23.021122 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="9cdb17e1-4872-47c6-a39d-eac9257959bf" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://6318732f39157d8f833daeabb446b0f5b265420e6e5607406e6542efdecb189c" gracePeriod=30 Jan 23 14:34:23 crc kubenswrapper[4775]: I0123 14:34:23.059288 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:23 crc kubenswrapper[4775]: I0123 14:34:23.059474 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-log" 
containerID="cri-o://0ad968b4ba3d3de850ad82e61b509fe22db14cd818dbc2c1bed979d9cea5791e" gracePeriod=30 Jan 23 14:34:23 crc kubenswrapper[4775]: I0123 14:34:23.059843 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://4d0b4a252f93936495c3a9a585fd700fd1b77459332bc2ab2912bf962d0c63a8" gracePeriod=30 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.377831 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.382855 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.558820 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data\") pod \"71a6469b-2bd1-4004-9a3d-c9d87161efab\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.558941 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r6jr\" (UniqueName: \"kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr\") pod \"ec6263e3-855a-48e5-ae77-25462d7e5a13\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.558999 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts\") pod \"71a6469b-2bd1-4004-9a3d-c9d87161efab\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.559025 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts\") pod \"ec6263e3-855a-48e5-ae77-25462d7e5a13\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.559054 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snmpp\" (UniqueName: \"kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp\") pod \"71a6469b-2bd1-4004-9a3d-c9d87161efab\" (UID: \"71a6469b-2bd1-4004-9a3d-c9d87161efab\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.559175 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data\") pod \"ec6263e3-855a-48e5-ae77-25462d7e5a13\" (UID: \"ec6263e3-855a-48e5-ae77-25462d7e5a13\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.564293 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp" (OuterVolumeSpecName: "kube-api-access-snmpp") pod "71a6469b-2bd1-4004-9a3d-c9d87161efab" (UID: "71a6469b-2bd1-4004-9a3d-c9d87161efab"). InnerVolumeSpecName "kube-api-access-snmpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.564321 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts" (OuterVolumeSpecName: "scripts") pod "71a6469b-2bd1-4004-9a3d-c9d87161efab" (UID: "71a6469b-2bd1-4004-9a3d-c9d87161efab"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.564978 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts" (OuterVolumeSpecName: "scripts") pod "ec6263e3-855a-48e5-ae77-25462d7e5a13" (UID: "ec6263e3-855a-48e5-ae77-25462d7e5a13"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.565435 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr" (OuterVolumeSpecName: "kube-api-access-4r6jr") pod "ec6263e3-855a-48e5-ae77-25462d7e5a13" (UID: "ec6263e3-855a-48e5-ae77-25462d7e5a13"). InnerVolumeSpecName "kube-api-access-4r6jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.582707 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data" (OuterVolumeSpecName: "config-data") pod "ec6263e3-855a-48e5-ae77-25462d7e5a13" (UID: "ec6263e3-855a-48e5-ae77-25462d7e5a13"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.586457 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data" (OuterVolumeSpecName: "config-data") pod "71a6469b-2bd1-4004-9a3d-c9d87161efab" (UID: "71a6469b-2bd1-4004-9a3d-c9d87161efab"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660623 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660664 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660678 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r6jr\" (UniqueName: \"kubernetes.io/projected/ec6263e3-855a-48e5-ae77-25462d7e5a13-kube-api-access-4r6jr\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660692 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71a6469b-2bd1-4004-9a3d-c9d87161efab-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660705 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec6263e3-855a-48e5-ae77-25462d7e5a13-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.660717 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snmpp\" (UniqueName: \"kubernetes.io/projected/71a6469b-2bd1-4004-9a3d-c9d87161efab-kube-api-access-snmpp\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.810589 4775 generic.go:334] "Generic (PLEG): container finished" podID="55b13561-f097-4a68-bc50-482d017d838d" containerID="eaec2cfe6653a3e7b4aa21077d223c9ff8028cc0a745039d700d8549a34e22e4" exitCode=0 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.810634 4775 generic.go:334] "Generic (PLEG): container finished" podID="55b13561-f097-4a68-bc50-482d017d838d" containerID="5c3d0956333cbd83fa5ca67cf1ad79878bf44e3d1a8725af4b0545d4d6530237" exitCode=143 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.810714 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerDied","Data":"eaec2cfe6653a3e7b4aa21077d223c9ff8028cc0a745039d700d8549a34e22e4"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.810783 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerDied","Data":"5c3d0956333cbd83fa5ca67cf1ad79878bf44e3d1a8725af4b0545d4d6530237"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.812910 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.813097 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df" event={"ID":"ec6263e3-855a-48e5-ae77-25462d7e5a13","Type":"ContainerDied","Data":"77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.813145 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77615832a6adcb83e24cbd6ebcba1287a3cc2749704e0310ddcb67c4e48edab3" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.819101 4775 generic.go:334] "Generic (PLEG): container finished" podID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerID="4d0b4a252f93936495c3a9a585fd700fd1b77459332bc2ab2912bf962d0c63a8" exitCode=0 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.819137 4775 generic.go:334] "Generic (PLEG): container finished" podID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerID="0ad968b4ba3d3de850ad82e61b509fe22db14cd818dbc2c1bed979d9cea5791e" exitCode=143 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.819236 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerDied","Data":"4d0b4a252f93936495c3a9a585fd700fd1b77459332bc2ab2912bf962d0c63a8"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.819913 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerDied","Data":"0ad968b4ba3d3de850ad82e61b509fe22db14cd818dbc2c1bed979d9cea5791e"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.832068 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" event={"ID":"71a6469b-2bd1-4004-9a3d-c9d87161efab","Type":"ContainerDied","Data":"156abf729125e825e482677cd02117e6955b3cd43618d229ad3b86794d80e8f0"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.832105 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="156abf729125e825e482677cd02117e6955b3cd43618d229ad3b86794d80e8f0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:23.832320 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.469815 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.476724 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.577327 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data\") pod \"e661567d-01ce-42ba-8257-c8a031e45a0f\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.577857 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data\") pod \"55b13561-f097-4a68-bc50-482d017d838d\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.577894 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs\") pod \"e661567d-01ce-42ba-8257-c8a031e45a0f\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.577927 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs\") pod \"55b13561-f097-4a68-bc50-482d017d838d\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.577953 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vfls\" (UniqueName: \"kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls\") pod \"e661567d-01ce-42ba-8257-c8a031e45a0f\" (UID: \"e661567d-01ce-42ba-8257-c8a031e45a0f\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.578010 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqghj\" (UniqueName: \"kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj\") pod \"55b13561-f097-4a68-bc50-482d017d838d\" (UID: \"55b13561-f097-4a68-bc50-482d017d838d\") " Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.578898 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs" (OuterVolumeSpecName: "logs") pod "55b13561-f097-4a68-bc50-482d017d838d" (UID: "55b13561-f097-4a68-bc50-482d017d838d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.579030 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs" (OuterVolumeSpecName: "logs") pod "e661567d-01ce-42ba-8257-c8a031e45a0f" (UID: "e661567d-01ce-42ba-8257-c8a031e45a0f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.579311 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e661567d-01ce-42ba-8257-c8a031e45a0f-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.579340 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55b13561-f097-4a68-bc50-482d017d838d-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.584952 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj" (OuterVolumeSpecName: "kube-api-access-qqghj") pod "55b13561-f097-4a68-bc50-482d017d838d" (UID: "55b13561-f097-4a68-bc50-482d017d838d"). InnerVolumeSpecName "kube-api-access-qqghj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.598644 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls" (OuterVolumeSpecName: "kube-api-access-9vfls") pod "e661567d-01ce-42ba-8257-c8a031e45a0f" (UID: "e661567d-01ce-42ba-8257-c8a031e45a0f"). InnerVolumeSpecName "kube-api-access-9vfls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.604259 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data" (OuterVolumeSpecName: "config-data") pod "55b13561-f097-4a68-bc50-482d017d838d" (UID: "55b13561-f097-4a68-bc50-482d017d838d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.609009 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data" (OuterVolumeSpecName: "config-data") pod "e661567d-01ce-42ba-8257-c8a031e45a0f" (UID: "e661567d-01ce-42ba-8257-c8a031e45a0f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.681125 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e661567d-01ce-42ba-8257-c8a031e45a0f-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.681183 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55b13561-f097-4a68-bc50-482d017d838d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.681195 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vfls\" (UniqueName: \"kubernetes.io/projected/e661567d-01ce-42ba-8257-c8a031e45a0f-kube-api-access-9vfls\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.681213 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqghj\" (UniqueName: \"kubernetes.io/projected/55b13561-f097-4a68-bc50-482d017d838d-kube-api-access-qqghj\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.847063 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"e661567d-01ce-42ba-8257-c8a031e45a0f","Type":"ContainerDied","Data":"a78d09621585651729bf825f6c0c7c87c9c74cae0df8c0c5b2cea147df0c160b"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.847108 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.847122 4775 scope.go:117] "RemoveContainer" containerID="4d0b4a252f93936495c3a9a585fd700fd1b77459332bc2ab2912bf962d0c63a8" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.850113 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.850128 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"55b13561-f097-4a68-bc50-482d017d838d","Type":"ContainerDied","Data":"c6887ae1f93aec0d07d00ff51cf7a1b2b8059f436065d03b9bea93ac8509208b"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.852875 4775 generic.go:334] "Generic (PLEG): container finished" podID="9cdb17e1-4872-47c6-a39d-eac9257959bf" containerID="6318732f39157d8f833daeabb446b0f5b265420e6e5607406e6542efdecb189c" exitCode=0 Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.852920 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cdb17e1-4872-47c6-a39d-eac9257959bf","Type":"ContainerDied","Data":"6318732f39157d8f833daeabb446b0f5b265420e6e5607406e6542efdecb189c"} Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.881502 4775 scope.go:117] "RemoveContainer" containerID="0ad968b4ba3d3de850ad82e61b509fe22db14cd818dbc2c1bed979d9cea5791e" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.887690 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.901747 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.918559 4775 scope.go:117] "RemoveContainer" containerID="eaec2cfe6653a3e7b4aa21077d223c9ff8028cc0a745039d700d8549a34e22e4" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.938864 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939321 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-log" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939379 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-log" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939449 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939495 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939558 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db75fd7c-ba91-4090-ac20-0009c06598f3" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939615 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="db75fd7c-ba91-4090-ac20-0009c06598f3" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939675 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939720 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939766 4775 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-log" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939834 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-log" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939886 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71a6469b-2bd1-4004-9a3d-c9d87161efab" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.939930 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="71a6469b-2bd1-4004-9a3d-c9d87161efab" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.939978 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-api" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940021 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-api" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940196 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940258 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-api" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940307 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940354 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="db75fd7c-ba91-4090-ac20-0009c06598f3" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940398 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-metadata" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940447 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="55b13561-f097-4a68-bc50-482d017d838d" containerName="nova-kuttl-api-log" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940497 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" containerName="nova-kuttl-metadata-log" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940549 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="71a6469b-2bd1-4004-9a3d-c9d87161efab" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: E0123 14:34:24.940759 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.940830 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" containerName="nova-manage" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.941539 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.945709 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.949054 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.958290 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.968447 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.977176 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.978203 4775 scope.go:117] "RemoveContainer" containerID="5c3d0956333cbd83fa5ca67cf1ad79878bf44e3d1a8725af4b0545d4d6530237" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.978701 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.987328 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:34:24 crc kubenswrapper[4775]: I0123 14:34:24.993422 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.064269 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.088773 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhnl9\" (UniqueName: \"kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.089125 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.089154 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.089174 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.089204 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-49w8c\" (UniqueName: \"kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.089228 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.190485 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data\") pod \"9cdb17e1-4872-47c6-a39d-eac9257959bf\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.190586 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnzdc\" (UniqueName: \"kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc\") pod \"9cdb17e1-4872-47c6-a39d-eac9257959bf\" (UID: \"9cdb17e1-4872-47c6-a39d-eac9257959bf\") " Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.190956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhnl9\" (UniqueName: \"kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191092 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191144 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191187 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191252 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49w8c\" (UniqueName: \"kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191303 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.191987 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.192267 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.196743 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc" (OuterVolumeSpecName: "kube-api-access-tnzdc") pod "9cdb17e1-4872-47c6-a39d-eac9257959bf" (UID: "9cdb17e1-4872-47c6-a39d-eac9257959bf"). InnerVolumeSpecName "kube-api-access-tnzdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.197466 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.204913 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.210071 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhnl9\" (UniqueName: \"kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9\") pod \"nova-kuttl-metadata-0\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.217184 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49w8c\" (UniqueName: \"kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c\") pod \"nova-kuttl-api-0\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.234423 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data" (OuterVolumeSpecName: "config-data") pod "9cdb17e1-4872-47c6-a39d-eac9257959bf" (UID: "9cdb17e1-4872-47c6-a39d-eac9257959bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.282458 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.293047 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cdb17e1-4872-47c6-a39d-eac9257959bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.293085 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnzdc\" (UniqueName: \"kubernetes.io/projected/9cdb17e1-4872-47c6-a39d-eac9257959bf-kube-api-access-tnzdc\") on node \"crc\" DevicePath \"\"" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.307874 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.729698 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55b13561-f097-4a68-bc50-482d017d838d" path="/var/lib/kubelet/pods/55b13561-f097-4a68-bc50-482d017d838d/volumes" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.731497 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e661567d-01ce-42ba-8257-c8a031e45a0f" path="/var/lib/kubelet/pods/e661567d-01ce-42ba-8257-c8a031e45a0f/volumes" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.816511 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.882996 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"9cdb17e1-4872-47c6-a39d-eac9257959bf","Type":"ContainerDied","Data":"72284d5a64aac0d3ed9862d805136b464b664d4d4255f42e29391ce52dc5c6db"} Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.883040 4775 scope.go:117] "RemoveContainer" containerID="6318732f39157d8f833daeabb446b0f5b265420e6e5607406e6542efdecb189c" Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.883125 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.893029 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerStarted","Data":"3ef3e20b260f3e98c87c0a0151aead3f8b34244b446f78a1aa8e60eef7375188"}
Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.902171 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.932086 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.966370 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.974389 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:34:25 crc kubenswrapper[4775]: E0123 14:34:25.974786 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cdb17e1-4872-47c6-a39d-eac9257959bf" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.974816 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cdb17e1-4872-47c6-a39d-eac9257959bf" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.974985 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cdb17e1-4872-47c6-a39d-eac9257959bf" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.977704 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.979533 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:34:25 crc kubenswrapper[4775]: I0123 14:34:25.980242 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.117582 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxqm7\" (UniqueName: \"kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.117654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.218973 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxqm7\" (UniqueName: \"kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.219404 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.227358 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.239328 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxqm7\" (UniqueName: \"kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7\") pod \"nova-kuttl-scheduler-0\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.293762 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.714467 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342"
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.863349 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:34:26 crc kubenswrapper[4775]: W0123 14:34:26.869315 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37d37972_46f4_48e0_a566_6984e8794cc4.slice/crio-eb2b04caf4109fd1b814ec31be59710165c03f6abaf4fa9c36d36f18bb0183bb WatchSource:0}: Error finding container eb2b04caf4109fd1b814ec31be59710165c03f6abaf4fa9c36d36f18bb0183bb: Status 404 returned error can't find the container with id eb2b04caf4109fd1b814ec31be59710165c03f6abaf4fa9c36d36f18bb0183bb
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.913303 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerStarted","Data":"cb5995d7a3a4ccb1bfe7a5d8b13bf7003f6838f5a23bd25715820c86d434289d"}
Jan 23 14:34:26 crc kubenswrapper[4775]: I0123 14:34:26.914433 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"37d37972-46f4-48e0-a566-6984e8794cc4","Type":"ContainerStarted","Data":"eb2b04caf4109fd1b814ec31be59710165c03f6abaf4fa9c36d36f18bb0183bb"}
Jan 23 14:34:27 crc kubenswrapper[4775]: I0123 14:34:27.726147 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cdb17e1-4872-47c6-a39d-eac9257959bf" path="/var/lib/kubelet/pods/9cdb17e1-4872-47c6-a39d-eac9257959bf/volumes"
Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.949354 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"37d37972-46f4-48e0-a566-6984e8794cc4","Type":"ContainerStarted","Data":"82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d"}
Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.956060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerStarted","Data":"9be00b1fe8658ccea45e4a9f713e00e608d129c69ec44babce1f2dbcdbd6fc58"}
Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.956107 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerStarted","Data":"df6cb3ee7f998b99b2041c07b0bace529437fed1642279c84c0bd4ade8cac2be"}
Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.963548 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerStarted","Data":"097b2364f83440a3132b6cb79cdb472334da74927128439c671d4d99b0398fa9"}
Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.963584 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerStarted","Data":"75614d1831bbac5592105e5265508722336cc15ee6a181f2f54c134aec1aa13b"}
Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.968061 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce"}
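Annotation: the VerifyControllerAttachedVolume / MountVolume.SetUp pairs above are the kubelet's volume reconciler bringing up the replacement scheduler pod's two volumes, a Secret-backed config volume and the projected service-account token. The manifest itself is not in the log, so the following is a minimal sketch of the Secret volume those lines imply, using the k8s.io/api types; the kube-api-access-qxqm7 projected volume is injected automatically by the API server and would not appear in the authored spec.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Assumed reconstruction from the reconciler lines above: "config-data"
    	// is a plain Secret volume; the secret name is taken from the reflector
    	// line ("nova-kuttl-scheduler-config-data").
    	vol := corev1.Volume{
    		Name: "config-data",
    		VolumeSource: corev1.VolumeSource{
    			Secret: &corev1.SecretVolumeSource{
    				SecretName: "nova-kuttl-scheduler-config-data",
    			},
    		},
    	}
    	fmt.Printf("%+v\n", vol)
    }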
pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce"} Jan 23 14:34:28 crc kubenswrapper[4775]: I0123 14:34:28.972891 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=3.972872524 podStartE2EDuration="3.972872524s" podCreationTimestamp="2026-01-23 14:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:28.969879304 +0000 UTC m=+1815.964708044" watchObservedRunningTime="2026-01-23 14:34:28.972872524 +0000 UTC m=+1815.967701254" Jan 23 14:34:29 crc kubenswrapper[4775]: I0123 14:34:29.040849 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=5.040828115 podStartE2EDuration="5.040828115s" podCreationTimestamp="2026-01-23 14:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:29.015238416 +0000 UTC m=+1816.010067156" watchObservedRunningTime="2026-01-23 14:34:29.040828115 +0000 UTC m=+1816.035656855" Jan 23 14:34:29 crc kubenswrapper[4775]: I0123 14:34:29.056258 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=5.05622997 podStartE2EDuration="5.05622997s" podCreationTimestamp="2026-01-23 14:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:34:29.039369856 +0000 UTC m=+1816.034198596" watchObservedRunningTime="2026-01-23 14:34:29.05622997 +0000 UTC m=+1816.051058750" Jan 23 14:34:30 crc kubenswrapper[4775]: I0123 14:34:30.308884 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:30 crc kubenswrapper[4775]: I0123 14:34:30.309269 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:31 crc kubenswrapper[4775]: I0123 14:34:31.294654 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:35 crc kubenswrapper[4775]: I0123 14:34:35.283637 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:35 crc kubenswrapper[4775]: I0123 14:34:35.284334 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:35 crc kubenswrapper[4775]: I0123 14:34:35.308590 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:35 crc kubenswrapper[4775]: I0123 14:34:35.308671 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.294421 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.326590 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.449016 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.198:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.449064 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.449009 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:34:36 crc kubenswrapper[4775]: I0123 14:34:36.449104 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.198:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:34:37 crc kubenswrapper[4775]: I0123 14:34:37.099379 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.287363 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.288219 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.289840 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.291058 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.318894 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.323254 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:45 crc kubenswrapper[4775]: I0123 14:34:45.327097 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.186271 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.189792 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.192268 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.656018 4775 scope.go:117] "RemoveContainer" containerID="b4c1b23769a70549b5013f743139c0324d53830c016cc7b8320ef98ddc16b647" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.688539 4775 scope.go:117] "RemoveContainer" containerID="29238591798a36dbd48ca4872cdddc49396b7b446c5f60340f5519ed8229bff3" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.738241 4775 scope.go:117] "RemoveContainer" containerID="4579a5ec0627d03f09f3dda4fc68f8fb4e44af53895a0e8c9b0a26eb695f55d2" Jan 23 14:34:46 crc kubenswrapper[4775]: I0123 14:34:46.772113 4775 scope.go:117] "RemoveContainer" containerID="03bac1f849c95644ae09fd2e62cba3da4e7525c38066ec2837085c381ddd303a" Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.031488 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.037136 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://8ebbe7df337eed7eec1cd0d49f40ddb05c909061e66825a9f581a0ea754192e7" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.044902 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.045409 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.064443 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.064686 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-log" containerID="cri-o://df6cb3ee7f998b99b2041c07b0bace529437fed1642279c84c0bd4ade8cac2be" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.065118 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-api" containerID="cri-o://9be00b1fe8658ccea45e4a9f713e00e608d129c69ec44babce1f2dbcdbd6fc58" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.112986 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.113217 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.302727 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:05 crc 
kubenswrapper[4775]: I0123 14:35:05.302938 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="6bcae715-33d1-4c44-9a33-f617c489dd8c" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30" gracePeriod=30 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.373630 4775 generic.go:334] "Generic (PLEG): container finished" podID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerID="df6cb3ee7f998b99b2041c07b0bace529437fed1642279c84c0bd4ade8cac2be" exitCode=143 Jan 23 14:35:05 crc kubenswrapper[4775]: I0123 14:35:05.373672 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerDied","Data":"df6cb3ee7f998b99b2041c07b0bace529437fed1642279c84c0bd4ade8cac2be"} Jan 23 14:35:06 crc kubenswrapper[4775]: E0123 14:35:06.294766 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d is running failed: container process not found" containerID="82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:35:06 crc kubenswrapper[4775]: E0123 14:35:06.295493 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d is running failed: container process not found" containerID="82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:35:06 crc kubenswrapper[4775]: E0123 14:35:06.296039 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d is running failed: container process not found" containerID="82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 23 14:35:06 crc kubenswrapper[4775]: E0123 14:35:06.296125 4775 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d is running failed: container process not found" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.385843 4775 generic.go:334] "Generic (PLEG): container finished" podID="37d37972-46f4-48e0-a566-6984e8794cc4" containerID="82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d" exitCode=0 Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.385913 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"37d37972-46f4-48e0-a566-6984e8794cc4","Type":"ContainerDied","Data":"82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d"} Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.577725 4775 util.go:48] "No ready sandbox for pod can be found. 
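Annotation: the teardown at 14:35:05 pairs each SyncLoop DELETE with "Killing container with a grace period ... gracePeriod=30". Note the exit codes that follow: nova-kuttl-api-log finishes with exitCode=143, which is 128 + 15, i.e. the process was terminated by the unhandled SIGTERM, while the scheduler exits 0 because it shut down cleanly within the grace period. A minimal sketch of that shutdown contract from the container's side:

    package main

    import (
    	"fmt"
    	"os"
    	"os/signal"
    	"syscall"
    )

    func main() {
    	// The runtime delivers SIGTERM, waits the grace period (30s here),
    	// then SIGKILLs. Catching the signal and exiting promptly is what
    	// turns an exitCode=143 into a clean exitCode=0.
    	sigCh := make(chan os.Signal, 1)
    	signal.Notify(sigCh, syscall.SIGTERM)
    	<-sigCh
    	fmt.Println("draining and exiting before the grace period expires")
    	os.Exit(0)
    }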
Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.665144 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxqm7\" (UniqueName: \"kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7\") pod \"37d37972-46f4-48e0-a566-6984e8794cc4\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") "
Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.665280 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data\") pod \"37d37972-46f4-48e0-a566-6984e8794cc4\" (UID: \"37d37972-46f4-48e0-a566-6984e8794cc4\") "
Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.676280 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7" (OuterVolumeSpecName: "kube-api-access-qxqm7") pod "37d37972-46f4-48e0-a566-6984e8794cc4" (UID: "37d37972-46f4-48e0-a566-6984e8794cc4"). InnerVolumeSpecName "kube-api-access-qxqm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.705018 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data" (OuterVolumeSpecName: "config-data") pod "37d37972-46f4-48e0-a566-6984e8794cc4" (UID: "37d37972-46f4-48e0-a566-6984e8794cc4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.767167 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxqm7\" (UniqueName: \"kubernetes.io/projected/37d37972-46f4-48e0-a566-6984e8794cc4-kube-api-access-qxqm7\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:06 crc kubenswrapper[4775]: I0123 14:35:06.767208 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37d37972-46f4-48e0-a566-6984e8794cc4-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.400759 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"37d37972-46f4-48e0-a566-6984e8794cc4","Type":"ContainerDied","Data":"eb2b04caf4109fd1b814ec31be59710165c03f6abaf4fa9c36d36f18bb0183bb"}
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.400855 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.400872 4775 scope.go:117] "RemoveContainer" containerID="82bf1887cd673f75ab307d48be18708430db52ba7211d20dd5bc425df5bd2a3d"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.453958 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.463483 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.485849 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:35:07 crc kubenswrapper[4775]: E0123 14:35:07.486201 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.486222 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.486421 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.487046 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.488878 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.496771 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.579839 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.580192 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvpt6\" (UniqueName: \"kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.681836 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvpt6\" (UniqueName: \"kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.681966 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.698970 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
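Annotation: note the UID churn for nova-kuttl-scheduler-0 above. The DELETE/REMOVE/ADD cycle retires UID 37d37972-46f4-48e0-a566-6984e8794cc4 and the same pod name returns as 053e93b4-4f28-478d-9065-20980afe9e20, so cpu_manager and memory_manager each log RemoveStaleState for the old UID. A small illustration of why: resource-manager state is keyed by pod UID plus container name, not by pod name (the map below is illustrative only, not the kubelet's actual structure).

    package main

    import "fmt"

    // podKey mirrors the (podUID, containerName) pairs in the log lines.
    type podKey struct {
    	UID       string
    	Container string
    }

    func main() {
    	state := map[podKey]string{
    		{"37d37972-46f4-48e0-a566-6984e8794cc4", "nova-kuttl-scheduler-scheduler"}: "cpuset assignment",
    	}
    	// The replacement pod arrives under a fresh UID, so the old UID's
    	// entry is stale and is removed, exactly as RemoveStaleState reports.
    	delete(state, podKey{"37d37972-46f4-48e0-a566-6984e8794cc4", "nova-kuttl-scheduler-scheduler"})
    	fmt.Println("entries left:", len(state))
    }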
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.721016 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvpt6\" (UniqueName: \"kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6\") pod \"nova-kuttl-scheduler-0\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.738009 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d37972-46f4-48e0-a566-6984e8794cc4" path="/var/lib/kubelet/pods/37d37972-46f4-48e0-a566-6984e8794cc4/volumes"
Jan 23 14:35:07 crc kubenswrapper[4775]: I0123 14:35:07.841770 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.230444 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.292407 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hj9j\" (UniqueName: \"kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j\") pod \"6bcae715-33d1-4c44-9a33-f617c489dd8c\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") "
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.292551 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data\") pod \"6bcae715-33d1-4c44-9a33-f617c489dd8c\" (UID: \"6bcae715-33d1-4c44-9a33-f617c489dd8c\") "
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.297988 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j" (OuterVolumeSpecName: "kube-api-access-7hj9j") pod "6bcae715-33d1-4c44-9a33-f617c489dd8c" (UID: "6bcae715-33d1-4c44-9a33-f617c489dd8c"). InnerVolumeSpecName "kube-api-access-7hj9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.330361 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data" (OuterVolumeSpecName: "config-data") pod "6bcae715-33d1-4c44-9a33-f617c489dd8c" (UID: "6bcae715-33d1-4c44-9a33-f617c489dd8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.361280 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.394681 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hj9j\" (UniqueName: \"kubernetes.io/projected/6bcae715-33d1-4c44-9a33-f617c489dd8c-kube-api-access-7hj9j\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.394730 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6bcae715-33d1-4c44-9a33-f617c489dd8c-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.412908 4775 generic.go:334] "Generic (PLEG): container finished" podID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerID="9be00b1fe8658ccea45e4a9f713e00e608d129c69ec44babce1f2dbcdbd6fc58" exitCode=0
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.412988 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerDied","Data":"9be00b1fe8658ccea45e4a9f713e00e608d129c69ec44babce1f2dbcdbd6fc58"}
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.415837 4775 generic.go:334] "Generic (PLEG): container finished" podID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" containerID="8ebbe7df337eed7eec1cd0d49f40ddb05c909061e66825a9f581a0ea754192e7" exitCode=0
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.415929 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa","Type":"ContainerDied","Data":"8ebbe7df337eed7eec1cd0d49f40ddb05c909061e66825a9f581a0ea754192e7"}
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.419360 4775 generic.go:334] "Generic (PLEG): container finished" podID="6bcae715-33d1-4c44-9a33-f617c489dd8c" containerID="00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30" exitCode=0
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.419468 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bcae715-33d1-4c44-9a33-f617c489dd8c","Type":"ContainerDied","Data":"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30"}
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.419549 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"6bcae715-33d1-4c44-9a33-f617c489dd8c","Type":"ContainerDied","Data":"993d5972eb5c6f4c100b944f0126ed4f2e54f4d9412dabbd89c853572013d71a"}
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.419626 4775 scope.go:117] "RemoveContainer" containerID="00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.419829 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.445508 4775 scope.go:117] "RemoveContainer" containerID="00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30"
Jan 23 14:35:08 crc kubenswrapper[4775]: E0123 14:35:08.457448 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30\": container with ID starting with 00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30 not found: ID does not exist" containerID="00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.457528 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30"} err="failed to get container status \"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30\": rpc error: code = NotFound desc = could not find container \"00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30\": container with ID starting with 00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30 not found: ID does not exist"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.501891 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.518123 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.524281 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:35:08 crc kubenswrapper[4775]: E0123 14:35:08.524629 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bcae715-33d1-4c44-9a33-f617c489dd8c" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.524640 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bcae715-33d1-4c44-9a33-f617c489dd8c" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.524888 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bcae715-33d1-4c44-9a33-f617c489dd8c" containerName="nova-kuttl-cell1-conductor-conductor"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.525425 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.527235 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.529300 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"]
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.597344 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.597506 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkqlt\" (UniqueName: \"kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.623002 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.698415 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqtbj\" (UniqueName: \"kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj\") pod \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") "
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.698620 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data\") pod \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\" (UID: \"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa\") "
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.699235 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.699452 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkqlt\" (UniqueName: \"kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.703363 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj" (OuterVolumeSpecName: "kube-api-access-hqtbj") pod "7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" (UID: "7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa"). InnerVolumeSpecName "kube-api-access-hqtbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
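Annotation: the ContainerStatus / DeleteContainer NotFound errors above are benign. The kubelet's cleanup raced with CRI-O, which had already removed 00f657f92e0b5f8eeea6508bbfb05372a2ce4865e934064fa0ca5e2ac689ab30, so "not found" effectively means "already done". A sketch of that idempotent-delete pattern; deleteFn stands in for the CRI call and is not a real client.

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeIfPresent treats gRPC NotFound as success so a delete that races
    // with the runtime's own garbage collection is not reported as an error.
    func removeIfPresent(id string, deleteFn func(string) error) error {
    	err := deleteFn(id)
    	if err != nil && status.Code(err) != codes.NotFound {
    		return err
    	}
    	return nil // removed now, or already gone
    }

    func main() {
    	alreadyGone := func(string) error {
    		return status.Error(codes.NotFound, "could not find container")
    	}
    	fmt.Println(removeIfPresent("00f657f9", alreadyGone)) // <nil>
    }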
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.704681 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.724557 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkqlt\" (UniqueName: \"kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.726974 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data" (OuterVolumeSpecName: "config-data") pod "7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" (UID: "7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.800036 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqtbj\" (UniqueName: \"kubernetes.io/projected/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-kube-api-access-hqtbj\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.800150 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.816427 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.865371 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.900821 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49w8c\" (UniqueName: \"kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c\") pod \"e3b50d2b-6889-4e09-b328-ce213458f6e3\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.900948 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data\") pod \"e3b50d2b-6889-4e09-b328-ce213458f6e3\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.901011 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs\") pod \"e3b50d2b-6889-4e09-b328-ce213458f6e3\" (UID: \"e3b50d2b-6889-4e09-b328-ce213458f6e3\") " Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.901860 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs" (OuterVolumeSpecName: "logs") pod "e3b50d2b-6889-4e09-b328-ce213458f6e3" (UID: "e3b50d2b-6889-4e09-b328-ce213458f6e3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.905953 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c" (OuterVolumeSpecName: "kube-api-access-49w8c") pod "e3b50d2b-6889-4e09-b328-ce213458f6e3" (UID: "e3b50d2b-6889-4e09-b328-ce213458f6e3"). InnerVolumeSpecName "kube-api-access-49w8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:08 crc kubenswrapper[4775]: I0123 14:35:08.932347 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data" (OuterVolumeSpecName: "config-data") pod "e3b50d2b-6889-4e09-b328-ce213458f6e3" (UID: "e3b50d2b-6889-4e09-b328-ce213458f6e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.002575 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49w8c\" (UniqueName: \"kubernetes.io/projected/e3b50d2b-6889-4e09-b328-ce213458f6e3-kube-api-access-49w8c\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.002622 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3b50d2b-6889-4e09-b328-ce213458f6e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.002636 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3b50d2b-6889-4e09-b328-ce213458f6e3-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:09 crc kubenswrapper[4775]: W0123 14:35:09.316707 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8dc76b90_669a_4df4_a976_1199443a8f55.slice/crio-84970fe316cbca495dccd6939de0eed1e17d5dc5945a7756f3a045d8dd58f52a WatchSource:0}: Error finding container 84970fe316cbca495dccd6939de0eed1e17d5dc5945a7756f3a045d8dd58f52a: Status 404 returned error can't find the container with id 84970fe316cbca495dccd6939de0eed1e17d5dc5945a7756f3a045d8dd58f52a Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.316986 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.436475 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"8dc76b90-669a-4df4-a976-1199443a8f55","Type":"ContainerStarted","Data":"84970fe316cbca495dccd6939de0eed1e17d5dc5945a7756f3a045d8dd58f52a"} Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.441571 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"e3b50d2b-6889-4e09-b328-ce213458f6e3","Type":"ContainerDied","Data":"cb5995d7a3a4ccb1bfe7a5d8b13bf7003f6838f5a23bd25715820c86d434289d"} Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.441633 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.441654 4775 scope.go:117] "RemoveContainer" containerID="9be00b1fe8658ccea45e4a9f713e00e608d129c69ec44babce1f2dbcdbd6fc58" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.458478 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa","Type":"ContainerDied","Data":"862d714ec5d72fa2cecc76c787b92a298898df37e1f6457c744d6aed52ae7549"} Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.458601 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.462738 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"053e93b4-4f28-478d-9065-20980afe9e20","Type":"ContainerStarted","Data":"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe"} Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.462780 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"053e93b4-4f28-478d-9065-20980afe9e20","Type":"ContainerStarted","Data":"c6208b8557503ef028aa8573339ec1a013f7ed363a4379dec4e0efaa541f0f37"} Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.496410 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=2.496392059 podStartE2EDuration="2.496392059s" podCreationTimestamp="2026-01-23 14:35:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:09.490324206 +0000 UTC m=+1856.485152966" watchObservedRunningTime="2026-01-23 14:35:09.496392059 +0000 UTC m=+1856.491220799" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.512043 4775 scope.go:117] "RemoveContainer" containerID="df6cb3ee7f998b99b2041c07b0bace529437fed1642279c84c0bd4ade8cac2be" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.551480 4775 scope.go:117] "RemoveContainer" containerID="8ebbe7df337eed7eec1cd0d49f40ddb05c909061e66825a9f581a0ea754192e7" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.571050 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.577902 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.600144 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.607776 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:09 crc kubenswrapper[4775]: E0123 14:35:09.608161 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608180 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:35:09 crc kubenswrapper[4775]: E0123 14:35:09.608192 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-log" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608199 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-log" Jan 23 14:35:09 crc kubenswrapper[4775]: E0123 14:35:09.608206 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-api" Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608213 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-api" Jan 23 
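Annotation: the podStartSLOduration figures above appear to be the watch-observed running time minus the pod's creation timestamp; the pull timestamps are the zero time because the images were already present, so no pull time is excluded. Reproducing the scheduler-0 number from the log's own fields:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Go's parser accepts a fractional seconds field in the input even
    	// when the layout omits it.
    	const layout = "2006-01-02 15:04:05 -0700 MST"
    	created, _ := time.Parse(layout, "2026-01-23 14:35:07 +0000 UTC")
    	observed, _ := time.Parse(layout, "2026-01-23 14:35:09.496392059 +0000 UTC")
    	fmt.Println(observed.Sub(created)) // 2.496392059s, matching podStartSLOduration
    }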
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608380 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" containerName="nova-kuttl-cell0-conductor-conductor"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.608389 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" containerName="nova-kuttl-api-log"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.609226 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.612873 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.632633 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.638437 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.645753 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.646739 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.649210 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.649434 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.716734 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhrnh\" (UniqueName: \"kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.717183 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.717226 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.717265 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.717307 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js8lh\" (UniqueName: \"kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.725392 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bcae715-33d1-4c44-9a33-f617c489dd8c" path="/var/lib/kubelet/pods/6bcae715-33d1-4c44-9a33-f617c489dd8c/volumes"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.725961 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa" path="/var/lib/kubelet/pods/7b007ba6-d3ea-4b9d-b325-3ffabb38bdfa/volumes"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.726431 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3b50d2b-6889-4e09-b328-ce213458f6e3" path="/var/lib/kubelet/pods/e3b50d2b-6889-4e09-b328-ce213458f6e3/volumes"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.818191 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.818247 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.818286 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.818327 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js8lh\" (UniqueName: \"kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.818396 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhrnh\" (UniqueName: \"kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.819494 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.833635 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.833717 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.838370 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhrnh\" (UniqueName: \"kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.839788 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js8lh\" (UniqueName: \"kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh\") pod \"nova-kuttl-api-0\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.904271 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.920059 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data\") pod \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") "
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.920126 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzqmq\" (UniqueName: \"kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq\") pod \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\" (UID: \"a3bbc7d7-fc9d-490e-9610-55805e5e876c\") "
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.926107 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq" (OuterVolumeSpecName: "kube-api-access-vzqmq") pod "a3bbc7d7-fc9d-490e-9610-55805e5e876c" (UID: "a3bbc7d7-fc9d-490e-9610-55805e5e876c"). InnerVolumeSpecName "kube-api-access-vzqmq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.932631 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.947940 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data" (OuterVolumeSpecName: "config-data") pod "a3bbc7d7-fc9d-490e-9610-55805e5e876c" (UID: "a3bbc7d7-fc9d-490e-9610-55805e5e876c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:35:09 crc kubenswrapper[4775]: I0123 14:35:09.965896 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.022343 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3bbc7d7-fc9d-490e-9610-55805e5e876c-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.022374 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzqmq\" (UniqueName: \"kubernetes.io/projected/a3bbc7d7-fc9d-490e-9610-55805e5e876c-kube-api-access-vzqmq\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.459881 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:35:10 crc kubenswrapper[4775]: W0123 14:35:10.461036 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93ee5e49_16f0_402a_9d8e_6f237110e663.slice/crio-fadc935d0ca1313694e64e348196ad9cf5ba16ec1ffcb2fcdd1d5a9b83025e52 WatchSource:0}: Error finding container fadc935d0ca1313694e64e348196ad9cf5ba16ec1ffcb2fcdd1d5a9b83025e52: Status 404 returned error can't find the container with id fadc935d0ca1313694e64e348196ad9cf5ba16ec1ffcb2fcdd1d5a9b83025e52
Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.474225 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"8dc76b90-669a-4df4-a976-1199443a8f55","Type":"ContainerStarted","Data":"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5"}
Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.474594 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.475947 4775 generic.go:334] "Generic (PLEG): container finished" podID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" containerID="c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874" exitCode=0
Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.475992 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.476024 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"a3bbc7d7-fc9d-490e-9610-55805e5e876c","Type":"ContainerDied","Data":"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874"} Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.476105 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"a3bbc7d7-fc9d-490e-9610-55805e5e876c","Type":"ContainerDied","Data":"3b893ae1dbc88ba1326e6a0a0bd54925381cdc400ec55f87f58040e0b56c3ac3"} Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.476139 4775 scope.go:117] "RemoveContainer" containerID="c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.483380 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerStarted","Data":"fadc935d0ca1313694e64e348196ad9cf5ba16ec1ffcb2fcdd1d5a9b83025e52"} Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.499024 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.503730 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.503713459 podStartE2EDuration="2.503713459s" podCreationTimestamp="2026-01-23 14:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:10.496180486 +0000 UTC m=+1857.491009226" watchObservedRunningTime="2026-01-23 14:35:10.503713459 +0000 UTC m=+1857.498542199" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.533255 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.539042 4775 scope.go:117] "RemoveContainer" containerID="c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874" Jan 23 14:35:10 crc kubenswrapper[4775]: E0123 14:35:10.539600 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874\": container with ID starting with c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874 not found: ID does not exist" containerID="c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.539627 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874"} err="failed to get container status \"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874\": rpc error: code = NotFound desc = could not find container \"c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874\": container with ID starting with c947db4e331433b229677ab1076193fcb4125ba5042f6359b54fe32fa2db3874 not found: ID does not exist" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.545766 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.552245 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: E0123 14:35:10.552618 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.552630 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.552785 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.553314 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.560020 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.563430 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-compute-fake1-compute-config-data" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.630360 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgvjr\" (UniqueName: \"kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.630648 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.731774 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.731941 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgvjr\" (UniqueName: \"kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.735669 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data\") pod 
\"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.754348 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.756021 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.762335 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgvjr\" (UniqueName: \"kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr\") pod \"nova-kuttl-cell1-compute-fake1-compute-0\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") " pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.765962 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.833883 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.833971 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.834019 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwrjm\" (UniqueName: \"kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.884170 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.936465 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.936533 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwrjm\" (UniqueName: \"kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.936593 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.937417 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.937772 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:10 crc kubenswrapper[4775]: I0123 14:35:10.954508 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwrjm\" (UniqueName: \"kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm\") pod \"certified-operators-nd8ng\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.174764 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.328055 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:11 crc kubenswrapper[4775]: W0123 14:35:11.334494 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbde4903d_4224_4139_a444_3c5baf78ff7b.slice/crio-6eb0a59b18194a13bbf978de13cdca6d55273f8b0946c59e7a3ffc58619e5617 WatchSource:0}: Error finding container 6eb0a59b18194a13bbf978de13cdca6d55273f8b0946c59e7a3ffc58619e5617: Status 404 returned error can't find the container with id 6eb0a59b18194a13bbf978de13cdca6d55273f8b0946c59e7a3ffc58619e5617 Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.489919 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:11 crc kubenswrapper[4775]: W0123 14:35:11.491750 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4561aa6c_c92c_4005_8587_a8367a331257.slice/crio-e139d0816faaf7bfd497ac42998f4b734f9ed93f619125ddbc81d602777ae54c WatchSource:0}: Error finding container e139d0816faaf7bfd497ac42998f4b734f9ed93f619125ddbc81d602777ae54c: Status 404 returned error can't find the container with id e139d0816faaf7bfd497ac42998f4b734f9ed93f619125ddbc81d602777ae54c Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.524929 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d","Type":"ContainerStarted","Data":"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.524976 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d","Type":"ContainerStarted","Data":"02382e22f435f1e3a7c73d28641f54e87db1dd32276e640504ea0f19f830c722"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.526662 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.531992 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerStarted","Data":"e139d0816faaf7bfd497ac42998f4b734f9ed93f619125ddbc81d602777ae54c"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.539486 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerStarted","Data":"5a0c9d73c99e74b57defba56af031189ee12f4eb97f9a8df2f62a83574ffa9a2"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.539528 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerStarted","Data":"aa0a614b45a14d37314ee88b48d9cdfd5a2ac59674285aa0bcd8f730765f5458"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.552950 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" 
event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerStarted","Data":"6eb0a59b18194a13bbf978de13cdca6d55273f8b0946c59e7a3ffc58619e5617"} Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.572653 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.572635979 podStartE2EDuration="2.572635979s" podCreationTimestamp="2026-01-23 14:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:11.572021182 +0000 UTC m=+1858.566849922" watchObservedRunningTime="2026-01-23 14:35:11.572635979 +0000 UTC m=+1858.567464719" Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.573734 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.5737304979999998 podStartE2EDuration="2.573730498s" podCreationTimestamp="2026-01-23 14:35:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:11.544940753 +0000 UTC m=+1858.539769493" watchObservedRunningTime="2026-01-23 14:35:11.573730498 +0000 UTC m=+1858.568559238" Jan 23 14:35:11 crc kubenswrapper[4775]: I0123 14:35:11.730952 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3bbc7d7-fc9d-490e-9610-55805e5e876c" path="/var/lib/kubelet/pods/a3bbc7d7-fc9d-490e-9610-55805e5e876c/volumes" Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.566633 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerStarted","Data":"86f7dc44e36aa4ad8a9b68c7e60260e8bf1d3fc6fbcb2e1071f96de63df5b107"} Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.568276 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.573795 4775 generic.go:334] "Generic (PLEG): container finished" podID="4561aa6c-c92c-4005-8587-a8367a331257" containerID="d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6" exitCode=0 Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.573930 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerDied","Data":"d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6"} Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.600727 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podStartSLOduration=2.600703348 podStartE2EDuration="2.600703348s" podCreationTimestamp="2026-01-23 14:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:12.589571168 +0000 UTC m=+1859.584399948" watchObservedRunningTime="2026-01-23 14:35:12.600703348 +0000 UTC m=+1859.595532128" Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.642472 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:12 crc kubenswrapper[4775]: I0123 14:35:12.843010 4775 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.158924 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.161054 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.183382 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.183866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt6gq\" (UniqueName: \"kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.184013 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.184083 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.285745 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt6gq\" (UniqueName: \"kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.285870 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.285910 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.286329 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.286652 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.312582 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt6gq\" (UniqueName: \"kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq\") pod \"community-operators-nmtbr\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.354012 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.355678 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.372755 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.387790 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.388001 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5csb\" (UniqueName: \"kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.388109 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.489635 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.489957 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.490016 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.490080 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5csb\" (UniqueName: \"kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.490457 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.490578 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.511508 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5csb\" (UniqueName: \"kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb\") pod \"redhat-marketplace-kzsg7\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.599221 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerStarted","Data":"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0"} Jan 23 14:35:13 crc kubenswrapper[4775]: I0123 14:35:13.680041 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.259550 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.361288 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:14 crc kubenswrapper[4775]: W0123 14:35:14.366446 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd91e4cde_f59f_4bc9_9f11_bc05386b065c.slice/crio-3b068f6ecde903e50cee9692d31810d6118d53fd385280961985c877c641c0f9 WatchSource:0}: Error finding container 3b068f6ecde903e50cee9692d31810d6118d53fd385280961985c877c641c0f9: Status 404 returned error can't find the container with id 3b068f6ecde903e50cee9692d31810d6118d53fd385280961985c877c641c0f9 Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.608060 4775 generic.go:334] "Generic (PLEG): container finished" podID="4561aa6c-c92c-4005-8587-a8367a331257" containerID="9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0" exitCode=0 Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.608129 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerDied","Data":"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0"} Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.609125 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerStarted","Data":"ada03641fce0fa691409ed399e7d688cfdebf997e0d324b6a8ee7ed3d292e94c"} Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.609148 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerStarted","Data":"3b068f6ecde903e50cee9692d31810d6118d53fd385280961985c877c641c0f9"} Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.612642 4775 generic.go:334] "Generic (PLEG): container finished" podID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerID="9fd15c51163da7b67d0215d00ae11cb461ae55a7ced3abddf9afbf2b1caac92d" exitCode=0 Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.612882 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerDied","Data":"9fd15c51163da7b67d0215d00ae11cb461ae55a7ced3abddf9afbf2b1caac92d"} Jan 23 14:35:14 crc kubenswrapper[4775]: I0123 14:35:14.612903 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerStarted","Data":"fd675cafec4d98add23e159f65f402c9edc343315ba027b2b7ac636cb9573a20"} Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.623003 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerStarted","Data":"2342482df363e816214bfa63cd48acba94e6573b39d29a37fe7dd668f947c7ec"} Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.626869 4775 generic.go:334] "Generic (PLEG): container finished" 
podID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerID="86f7dc44e36aa4ad8a9b68c7e60260e8bf1d3fc6fbcb2e1071f96de63df5b107" exitCode=0 Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.626952 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerDied","Data":"86f7dc44e36aa4ad8a9b68c7e60260e8bf1d3fc6fbcb2e1071f96de63df5b107"} Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.627518 4775 scope.go:117] "RemoveContainer" containerID="86f7dc44e36aa4ad8a9b68c7e60260e8bf1d3fc6fbcb2e1071f96de63df5b107" Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.631082 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerStarted","Data":"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea"} Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.635241 4775 generic.go:334] "Generic (PLEG): container finished" podID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerID="ada03641fce0fa691409ed399e7d688cfdebf997e0d324b6a8ee7ed3d292e94c" exitCode=0 Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.635287 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerDied","Data":"ada03641fce0fa691409ed399e7d688cfdebf997e0d324b6a8ee7ed3d292e94c"} Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.713423 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nd8ng" podStartSLOduration=3.22246622 podStartE2EDuration="5.713396393s" podCreationTimestamp="2026-01-23 14:35:10 +0000 UTC" firstStartedPulling="2026-01-23 14:35:12.576435624 +0000 UTC m=+1859.571264404" lastFinishedPulling="2026-01-23 14:35:15.067365837 +0000 UTC m=+1862.062194577" observedRunningTime="2026-01-23 14:35:15.710052773 +0000 UTC m=+1862.704881523" watchObservedRunningTime="2026-01-23 14:35:15.713396393 +0000 UTC m=+1862.708225163" Jan 23 14:35:15 crc kubenswrapper[4775]: I0123 14:35:15.885545 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.651486 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerStarted","Data":"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"} Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.652057 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.657004 4775 generic.go:334] "Generic (PLEG): container finished" podID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerID="0f16ae8937f5fb70cd5743359a2f7a31c4f3df2c152d44a3cc32f6e7bc378055" exitCode=0 Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.657127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerDied","Data":"0f16ae8937f5fb70cd5743359a2f7a31c4f3df2c152d44a3cc32f6e7bc378055"} Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.665677 
4775 generic.go:334] "Generic (PLEG): container finished" podID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerID="2342482df363e816214bfa63cd48acba94e6573b39d29a37fe7dd668f947c7ec" exitCode=0 Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.667413 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerDied","Data":"2342482df363e816214bfa63cd48acba94e6573b39d29a37fe7dd668f947c7ec"} Jan 23 14:35:16 crc kubenswrapper[4775]: I0123 14:35:16.694730 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.674860 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerStarted","Data":"046c1051d3c02cded54b9aeb6c0f3033ce2b334c91ae79498769d193f70da826"} Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.678371 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerStarted","Data":"9c3b807319ac23515db33902dbe750669bfbce758abed195bb2690280ffd34b0"} Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.705425 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kzsg7" podStartSLOduration=3.179606103 podStartE2EDuration="4.705401943s" podCreationTimestamp="2026-01-23 14:35:13 +0000 UTC" firstStartedPulling="2026-01-23 14:35:15.637540719 +0000 UTC m=+1862.632369459" lastFinishedPulling="2026-01-23 14:35:17.163336549 +0000 UTC m=+1864.158165299" observedRunningTime="2026-01-23 14:35:17.700782189 +0000 UTC m=+1864.695610929" watchObservedRunningTime="2026-01-23 14:35:17.705401943 +0000 UTC m=+1864.700230693" Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.843168 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.897100 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:17 crc kubenswrapper[4775]: I0123 14:35:17.934937 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nmtbr" podStartSLOduration=2.400378958 podStartE2EDuration="4.934908987s" podCreationTimestamp="2026-01-23 14:35:13 +0000 UTC" firstStartedPulling="2026-01-23 14:35:14.614569847 +0000 UTC m=+1861.609398617" lastFinishedPulling="2026-01-23 14:35:17.149099886 +0000 UTC m=+1864.143928646" observedRunningTime="2026-01-23 14:35:17.734274471 +0000 UTC m=+1864.729103221" watchObservedRunningTime="2026-01-23 14:35:17.934908987 +0000 UTC m=+1864.929737767" Jan 23 14:35:18 crc kubenswrapper[4775]: I0123 14:35:18.713483 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:18 crc kubenswrapper[4775]: I0123 14:35:18.893879 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:19 crc kubenswrapper[4775]: I0123 14:35:19.933910 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:19 crc kubenswrapper[4775]: I0123 14:35:19.934259 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:19 crc kubenswrapper[4775]: I0123 14:35:19.999440 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.710666 4775 generic.go:334] "Generic (PLEG): container finished" podID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f" exitCode=0 Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.710730 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerDied","Data":"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"} Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.710774 4775 scope.go:117] "RemoveContainer" containerID="86f7dc44e36aa4ad8a9b68c7e60260e8bf1d3fc6fbcb2e1071f96de63df5b107" Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.711900 4775 scope.go:117] "RemoveContainer" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f" Jan 23 14:35:20 crc kubenswrapper[4775]: E0123 14:35:20.712178 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(bde4903d-4224-4139-a444-3c5baf78ff7b)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.885503 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:20 crc kubenswrapper[4775]: I0123 14:35:20.885560 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.016037 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.016070 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.174900 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.174935 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.250432 4775 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.748274 4775 scope.go:117] "RemoveContainer" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f" Jan 23 14:35:21 crc kubenswrapper[4775]: E0123 14:35:21.748693 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-kuttl-cell1-compute-fake1-compute-compute\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-kuttl-cell1-compute-fake1-compute-compute pod=nova-kuttl-cell1-compute-fake1-compute-0_nova-kuttl-default(bde4903d-4224-4139-a444-3c5baf78ff7b)\"" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" Jan 23 14:35:21 crc kubenswrapper[4775]: I0123 14:35:21.820020 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:22 crc kubenswrapper[4775]: I0123 14:35:22.946401 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.489930 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.489995 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.587704 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.681374 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.681490 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.744798 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.783338 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nd8ng" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="registry-server" containerID="cri-o://82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea" gracePeriod=2 Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.853860 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:23 crc kubenswrapper[4775]: I0123 14:35:23.860259 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.241643 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.294491 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities\") pod \"4561aa6c-c92c-4005-8587-a8367a331257\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.294746 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content\") pod \"4561aa6c-c92c-4005-8587-a8367a331257\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.294884 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwrjm\" (UniqueName: \"kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm\") pod \"4561aa6c-c92c-4005-8587-a8367a331257\" (UID: \"4561aa6c-c92c-4005-8587-a8367a331257\") " Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.295870 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities" (OuterVolumeSpecName: "utilities") pod "4561aa6c-c92c-4005-8587-a8367a331257" (UID: "4561aa6c-c92c-4005-8587-a8367a331257"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.303349 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm" (OuterVolumeSpecName: "kube-api-access-gwrjm") pod "4561aa6c-c92c-4005-8587-a8367a331257" (UID: "4561aa6c-c92c-4005-8587-a8367a331257"). InnerVolumeSpecName "kube-api-access-gwrjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.364112 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4561aa6c-c92c-4005-8587-a8367a331257" (UID: "4561aa6c-c92c-4005-8587-a8367a331257"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.397014 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwrjm\" (UniqueName: \"kubernetes.io/projected/4561aa6c-c92c-4005-8587-a8367a331257-kube-api-access-gwrjm\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.397050 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.397061 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4561aa6c-c92c-4005-8587-a8367a331257-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.796762 4775 generic.go:334] "Generic (PLEG): container finished" podID="4561aa6c-c92c-4005-8587-a8367a331257" containerID="82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea" exitCode=0 Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.796884 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nd8ng" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.796913 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerDied","Data":"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea"} Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.797370 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nd8ng" event={"ID":"4561aa6c-c92c-4005-8587-a8367a331257","Type":"ContainerDied","Data":"e139d0816faaf7bfd497ac42998f4b734f9ed93f619125ddbc81d602777ae54c"} Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.797405 4775 scope.go:117] "RemoveContainer" containerID="82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.823875 4775 scope.go:117] "RemoveContainer" containerID="9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.837067 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.848213 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nd8ng"] Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.871387 4775 scope.go:117] "RemoveContainer" containerID="d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.887571 4775 scope.go:117] "RemoveContainer" containerID="82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea" Jan 23 14:35:24 crc kubenswrapper[4775]: E0123 14:35:24.888305 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea\": container with ID starting with 82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea not found: ID does not exist" containerID="82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.888347 
4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea"} err="failed to get container status \"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea\": rpc error: code = NotFound desc = could not find container \"82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea\": container with ID starting with 82785ee402f14c834fde416737c772f725122a72eb1b01032cbbd13aa84e6cea not found: ID does not exist" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.888373 4775 scope.go:117] "RemoveContainer" containerID="9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0" Jan 23 14:35:24 crc kubenswrapper[4775]: E0123 14:35:24.888958 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0\": container with ID starting with 9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0 not found: ID does not exist" containerID="9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.888979 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0"} err="failed to get container status \"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0\": rpc error: code = NotFound desc = could not find container \"9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0\": container with ID starting with 9970a01e583becbfe2474b23c43d2606a65ffdd1b62802118ea464e68db123a0 not found: ID does not exist" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.889007 4775 scope.go:117] "RemoveContainer" containerID="d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6" Jan 23 14:35:24 crc kubenswrapper[4775]: E0123 14:35:24.889722 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6\": container with ID starting with d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6 not found: ID does not exist" containerID="d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6" Jan 23 14:35:24 crc kubenswrapper[4775]: I0123 14:35:24.889766 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6"} err="failed to get container status \"d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6\": rpc error: code = NotFound desc = could not find container \"d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6\": container with ID starting with d86b7852f950b38bf17633f226980afe5a97aebd085dea51e06ffca20bbd08f6 not found: ID does not exist" Jan 23 14:35:25 crc kubenswrapper[4775]: I0123 14:35:25.727221 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4561aa6c-c92c-4005-8587-a8367a331257" path="/var/lib/kubelet/pods/4561aa6c-c92c-4005-8587-a8367a331257/volumes" Jan 23 14:35:25 crc kubenswrapper[4775]: I0123 14:35:25.949395 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:25 crc kubenswrapper[4775]: I0123 14:35:25.949650 4775 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/community-operators-nmtbr" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="registry-server" containerID="cri-o://9c3b807319ac23515db33902dbe750669bfbce758abed195bb2690280ffd34b0" gracePeriod=2 Jan 23 14:35:26 crc kubenswrapper[4775]: I0123 14:35:26.827428 4775 generic.go:334] "Generic (PLEG): container finished" podID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerID="9c3b807319ac23515db33902dbe750669bfbce758abed195bb2690280ffd34b0" exitCode=0 Jan 23 14:35:26 crc kubenswrapper[4775]: I0123 14:35:26.827834 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerDied","Data":"9c3b807319ac23515db33902dbe750669bfbce758abed195bb2690280ffd34b0"} Jan 23 14:35:26 crc kubenswrapper[4775]: I0123 14:35:26.964897 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:26 crc kubenswrapper[4775]: I0123 14:35:26.965521 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kzsg7" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="registry-server" containerID="cri-o://046c1051d3c02cded54b9aeb6c0f3033ce2b334c91ae79498769d193f70da826" gracePeriod=2 Jan 23 14:35:26 crc kubenswrapper[4775]: I0123 14:35:26.985409 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.044710 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt6gq\" (UniqueName: \"kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq\") pod \"0fa87919-c37c-422f-8c5d-f5f54162a229\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.044765 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content\") pod \"0fa87919-c37c-422f-8c5d-f5f54162a229\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.044893 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities\") pod \"0fa87919-c37c-422f-8c5d-f5f54162a229\" (UID: \"0fa87919-c37c-422f-8c5d-f5f54162a229\") " Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.046549 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities" (OuterVolumeSpecName: "utilities") pod "0fa87919-c37c-422f-8c5d-f5f54162a229" (UID: "0fa87919-c37c-422f-8c5d-f5f54162a229"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.052954 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq" (OuterVolumeSpecName: "kube-api-access-vt6gq") pod "0fa87919-c37c-422f-8c5d-f5f54162a229" (UID: "0fa87919-c37c-422f-8c5d-f5f54162a229"). InnerVolumeSpecName "kube-api-access-vt6gq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.147198 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fa87919-c37c-422f-8c5d-f5f54162a229" (UID: "0fa87919-c37c-422f-8c5d-f5f54162a229"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.148113 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt6gq\" (UniqueName: \"kubernetes.io/projected/0fa87919-c37c-422f-8c5d-f5f54162a229-kube-api-access-vt6gq\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.148189 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.148208 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fa87919-c37c-422f-8c5d-f5f54162a229-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.842359 4775 generic.go:334] "Generic (PLEG): container finished" podID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerID="046c1051d3c02cded54b9aeb6c0f3033ce2b334c91ae79498769d193f70da826" exitCode=0 Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.842418 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerDied","Data":"046c1051d3c02cded54b9aeb6c0f3033ce2b334c91ae79498769d193f70da826"} Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.853353 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nmtbr" event={"ID":"0fa87919-c37c-422f-8c5d-f5f54162a229","Type":"ContainerDied","Data":"fd675cafec4d98add23e159f65f402c9edc343315ba027b2b7ac636cb9573a20"} Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.853458 4775 scope.go:117] "RemoveContainer" containerID="9c3b807319ac23515db33902dbe750669bfbce758abed195bb2690280ffd34b0" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.853955 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nmtbr" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.883609 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.886426 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nmtbr"] Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.907062 4775 scope.go:117] "RemoveContainer" containerID="2342482df363e816214bfa63cd48acba94e6573b39d29a37fe7dd668f947c7ec" Jan 23 14:35:27 crc kubenswrapper[4775]: I0123 14:35:27.999314 4775 scope.go:117] "RemoveContainer" containerID="9fd15c51163da7b67d0215d00ae11cb461ae55a7ced3abddf9afbf2b1caac92d" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.013935 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.069755 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5csb\" (UniqueName: \"kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb\") pod \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.069978 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities\") pod \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.070069 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content\") pod \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\" (UID: \"d91e4cde-f59f-4bc9-9f11-bc05386b065c\") " Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.072187 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities" (OuterVolumeSpecName: "utilities") pod "d91e4cde-f59f-4bc9-9f11-bc05386b065c" (UID: "d91e4cde-f59f-4bc9-9f11-bc05386b065c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.073783 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.074039 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb" (OuterVolumeSpecName: "kube-api-access-b5csb") pod "d91e4cde-f59f-4bc9-9f11-bc05386b065c" (UID: "d91e4cde-f59f-4bc9-9f11-bc05386b065c"). InnerVolumeSpecName "kube-api-access-b5csb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.102008 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d91e4cde-f59f-4bc9-9f11-bc05386b065c" (UID: "d91e4cde-f59f-4bc9-9f11-bc05386b065c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.176036 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d91e4cde-f59f-4bc9-9f11-bc05386b065c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.176094 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5csb\" (UniqueName: \"kubernetes.io/projected/d91e4cde-f59f-4bc9-9f11-bc05386b065c-kube-api-access-b5csb\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.868059 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kzsg7" event={"ID":"d91e4cde-f59f-4bc9-9f11-bc05386b065c","Type":"ContainerDied","Data":"3b068f6ecde903e50cee9692d31810d6118d53fd385280961985c877c641c0f9"} Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.868125 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kzsg7" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.868163 4775 scope.go:117] "RemoveContainer" containerID="046c1051d3c02cded54b9aeb6c0f3033ce2b334c91ae79498769d193f70da826" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.906075 4775 scope.go:117] "RemoveContainer" containerID="0f16ae8937f5fb70cd5743359a2f7a31c4f3df2c152d44a3cc32f6e7bc378055" Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.931721 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.941203 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kzsg7"] Jan 23 14:35:28 crc kubenswrapper[4775]: I0123 14:35:28.943701 4775 scope.go:117] "RemoveContainer" containerID="ada03641fce0fa691409ed399e7d688cfdebf997e0d324b6a8ee7ed3d292e94c" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.729943 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" path="/var/lib/kubelet/pods/0fa87919-c37c-422f-8c5d-f5f54162a229/volumes" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.731520 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" path="/var/lib/kubelet/pods/d91e4cde-f59f-4bc9-9f11-bc05386b065c/volumes" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.943444 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.943892 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.944993 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.945086 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.950411 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:29 crc kubenswrapper[4775]: I0123 14:35:29.953310 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:35 crc kubenswrapper[4775]: I0123 14:35:35.714029 4775 scope.go:117] "RemoveContainer" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f" Jan 23 14:35:36 crc kubenswrapper[4775]: I0123 14:35:36.976257 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerStarted","Data":"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"} Jan 23 14:35:36 crc kubenswrapper[4775]: I0123 14:35:36.977609 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:37 crc kubenswrapper[4775]: I0123 14:35:37.032627 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.273619 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.285978 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-bvq25"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.296610 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.305028 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.318360 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-kmnk2"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.330379 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-host-discover-qb9df"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.335904 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.388634 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.388935 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log" containerID="cri-o://75614d1831bbac5592105e5265508722336cc15ee6a181f2f54c134aec1aa13b" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.389384 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://097b2364f83440a3132b6cb79cdb472334da74927128439c671d4d99b0398fa9" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403026 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell11814-account-delete-f4rp4"] Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403380 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="extract-utilities" Jan 23 14:35:38 crc 
kubenswrapper[4775]: I0123 14:35:38.403392 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="extract-utilities" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403412 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403418 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403428 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403434 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403445 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403451 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403459 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403464 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403473 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="extract-utilities" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403478 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="extract-utilities" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403490 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="extract-utilities" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403495 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="extract-utilities" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403503 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403510 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" containerName="extract-content" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.403522 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403529 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403718 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fa87919-c37c-422f-8c5d-f5f54162a229" 
containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403729 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4561aa6c-c92c-4005-8587-a8367a331257" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.403749 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="d91e4cde-f59f-4bc9-9f11-bc05386b065c" containerName="registry-server" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.404299 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.412170 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell11814-account-delete-f4rp4"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.486666 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.486740 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l5p5\" (UniqueName: \"kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.487038 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novaapia3ac-account-delete-gzdbr"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.488756 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.494950 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.505406 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.505650 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" containerName="nova-kuttl-cell1-conductor-conductor" containerID="cri-o://d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.512004 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-svgzc"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.522947 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapia3ac-account-delete-gzdbr"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.580947 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.581297 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="053e93b4-4f28-478d-9065-20980afe9e20" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.587688 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.587730 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.587765 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l5p5\" (UniqueName: \"kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.587846 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw59g\" (UniqueName: \"kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.594451 4775 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.608096 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.608400 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log" containerID="cri-o://aa0a614b45a14d37314ee88b48d9cdfd5a2ac59674285aa0bcd8f730765f5458" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.611372 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api" containerID="cri-o://5a0c9d73c99e74b57defba56af031189ee12f4eb97f9a8df2f62a83574ffa9a2" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.637586 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.637977 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerName="nova-kuttl-cell0-conductor-conductor" containerID="cri-o://575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" gracePeriod=30 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.641261 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l5p5\" (UniqueName: \"kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5\") pod \"novacell11814-account-delete-f4rp4\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.666752 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.684055 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/novacell04dcc-account-delete-bljlz"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.685216 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.689864 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw59g\" (UniqueName: \"kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.689934 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6g9q\" (UniqueName: \"kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.690010 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.690039 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.691352 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.702258 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-hr855"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.708420 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw59g\" (UniqueName: \"kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g\") pod \"novaapia3ac-account-delete-gzdbr\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.710996 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell04dcc-account-delete-bljlz"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.716896 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.717298 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podUID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" containerName="nova-kuttl-cell1-novncproxy-novncproxy" containerID="cri-o://adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35" gracePeriod=30 Jan 23 14:35:38 
crc kubenswrapper[4775]: I0123 14:35:38.743054 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.791415 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.791577 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6g9q\" (UniqueName: \"kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.792493 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.809407 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6g9q\" (UniqueName: \"kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q\") pod \"novacell04dcc-account-delete-bljlz\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.848108 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.891581 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.895860 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.901332 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:38 crc kubenswrapper[4775]: E0123 14:35:38.901402 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.995278 4775 generic.go:334] "Generic (PLEG): container finished" podID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerID="aa0a614b45a14d37314ee88b48d9cdfd5a2ac59674285aa0bcd8f730765f5458" exitCode=143 Jan 23 14:35:38 crc kubenswrapper[4775]: I0123 14:35:38.995360 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerDied","Data":"aa0a614b45a14d37314ee88b48d9cdfd5a2ac59674285aa0bcd8f730765f5458"} Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:38.999451 4775 generic.go:334] "Generic (PLEG): container finished" podID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerID="75614d1831bbac5592105e5265508722336cc15ee6a181f2f54c134aec1aa13b" exitCode=143 Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:38.999492 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerDied","Data":"75614d1831bbac5592105e5265508722336cc15ee6a181f2f54c134aec1aa13b"} Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:38.999852 4775 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" secret="" err="secret \"nova-nova-kuttl-dockercfg-289sx\" not found" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.004757 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.096521 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.096569 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. No retries permitted until 2026-01-23 14:35:39.596554805 +0000 UTC m=+1886.591383545 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.248761 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell11814-account-delete-f4rp4"] Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.325789 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novaapia3ac-account-delete-gzdbr"] Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.484206 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.488724 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/novacell04dcc-account-delete-bljlz"] Jan 23 14:35:39 crc kubenswrapper[4775]: W0123 14:35:39.489147 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc1717a4_664a_4a44_9206_0b5c472cbd50.slice/crio-493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848 WatchSource:0}: Error finding container 493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848: Status 404 returned error can't find the container with id 493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848 Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.613298 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data\") pod \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.613505 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64gdf\" (UniqueName: \"kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf\") pod \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\" (UID: \"51e63565-a2ef-4d12-af2f-f3dc6c2942d9\") " Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.613948 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.614000 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. 
No retries permitted until 2026-01-23 14:35:40.613985846 +0000 UTC m=+1887.608814586 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.621241 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf" (OuterVolumeSpecName: "kube-api-access-64gdf") pod "51e63565-a2ef-4d12-af2f-f3dc6c2942d9" (UID: "51e63565-a2ef-4d12-af2f-f3dc6c2942d9"). InnerVolumeSpecName "kube-api-access-64gdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.641956 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data" (OuterVolumeSpecName: "config-data") pod "51e63565-a2ef-4d12-af2f-f3dc6c2942d9" (UID: "51e63565-a2ef-4d12-af2f-f3dc6c2942d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.715503 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64gdf\" (UniqueName: \"kubernetes.io/projected/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-kube-api-access-64gdf\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.715855 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51e63565-a2ef-4d12-af2f-f3dc6c2942d9-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.725946 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="004165d0-70f3-4e04-8f77-1342a98147bb" path="/var/lib/kubelet/pods/004165d0-70f3-4e04-8f77-1342a98147bb/volumes" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.726723 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71a6469b-2bd1-4004-9a3d-c9d87161efab" path="/var/lib/kubelet/pods/71a6469b-2bd1-4004-9a3d-c9d87161efab/volumes" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.727517 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db75fd7c-ba91-4090-ac20-0009c06598f3" path="/var/lib/kubelet/pods/db75fd7c-ba91-4090-ac20-0009c06598f3/volumes" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.728251 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6263e3-855a-48e5-ae77-25462d7e5a13" path="/var/lib/kubelet/pods/ec6263e3-855a-48e5-ae77-25462d7e5a13/volumes" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.730526 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25e3b63-3402-4d38-8f18-e4f015797854" path="/var/lib/kubelet/pods/f25e3b63-3402-4d38-8f18-e4f015797854/volumes" Jan 23 14:35:39 crc kubenswrapper[4775]: I0123 14:35:39.947073 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.968047 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.978177 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.981897 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 23 14:35:39 crc kubenswrapper[4775]: E0123 14:35:39.981944 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.008864 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" event={"ID":"bc1717a4-664a-4a44-9206-0b5c472cbd50","Type":"ContainerStarted","Data":"dfda1a9e78a513115b2113a2fcaec48ff69d5be5bceff17b19195b09fc695118"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.008910 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" event={"ID":"bc1717a4-664a-4a44-9206-0b5c472cbd50","Type":"ContainerStarted","Data":"493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.011432 4775 generic.go:334] "Generic (PLEG): container finished" podID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" containerID="adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35" exitCode=0 Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.011485 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"51e63565-a2ef-4d12-af2f-f3dc6c2942d9","Type":"ContainerDied","Data":"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.011505 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"51e63565-a2ef-4d12-af2f-f3dc6c2942d9","Type":"ContainerDied","Data":"176dcff14ce2e75b9b75fea74f3c3fe40830311cc826cb992f71f0968d9bd274"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.011524 4775 scope.go:117] "RemoveContainer" containerID="adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.011629 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.015384 4775 generic.go:334] "Generic (PLEG): container finished" podID="216ac3cd-4e4b-40b9-b05d-be15cfe121ed" containerID="c66c6806d40d02d59cb9c150734f4cbd3c4f3513f91224480738c9614deade7b" exitCode=0 Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.015484 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" event={"ID":"216ac3cd-4e4b-40b9-b05d-be15cfe121ed","Type":"ContainerDied","Data":"c66c6806d40d02d59cb9c150734f4cbd3c4f3513f91224480738c9614deade7b"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.015540 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" event={"ID":"216ac3cd-4e4b-40b9-b05d-be15cfe121ed","Type":"ContainerStarted","Data":"f8ac7a707989f4b704ed34110c6cbeac748ed5bc390cc7fe8e9c1bd9c862dadb"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.017311 4775 generic.go:334] "Generic (PLEG): container finished" podID="8dc76b90-669a-4df4-a976-1199443a8f55" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" exitCode=0 Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.017367 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"8dc76b90-669a-4df4-a976-1199443a8f55","Type":"ContainerDied","Data":"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.017382 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"8dc76b90-669a-4df4-a976-1199443a8f55","Type":"ContainerDied","Data":"84970fe316cbca495dccd6939de0eed1e17d5dc5945a7756f3a045d8dd58f52a"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.017454 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.018622 4775 generic.go:334] "Generic (PLEG): container finished" podID="7d8e2db8-c4c4-48ea-83a1-d750eb6de857" containerID="5adc38c96008a8a594360e5e6bb09c834348a926f5530d7c364ad7b4ca6f9d2b" exitCode=0 Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.018755 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" containerID="cri-o://0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" gracePeriod=30 Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.018921 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" event={"ID":"7d8e2db8-c4c4-48ea-83a1-d750eb6de857","Type":"ContainerDied","Data":"5adc38c96008a8a594360e5e6bb09c834348a926f5530d7c364ad7b4ca6f9d2b"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.019080 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" event={"ID":"7d8e2db8-c4c4-48ea-83a1-d750eb6de857","Type":"ContainerStarted","Data":"2c60df97c5de83b16f76501e503728db1f38861868acc41b3bbf53e358943ce1"} Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.026779 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" podStartSLOduration=2.026761978 podStartE2EDuration="2.026761978s" podCreationTimestamp="2026-01-23 14:35:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:40.021722032 +0000 UTC m=+1887.016550782" watchObservedRunningTime="2026-01-23 14:35:40.026761978 +0000 UTC m=+1887.021590718" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.036959 4775 scope.go:117] "RemoveContainer" containerID="adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35" Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.040710 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35\": container with ID starting with adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35 not found: ID does not exist" containerID="adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.040743 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35"} err="failed to get container status \"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35\": rpc error: code = NotFound desc = could not find container \"adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35\": container with ID starting with adde5b85c57c8932f4247945dfd19a8b18268f554e69fc71d0caf9b3c97cbb35 not found: ID does not exist" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.040772 4775 scope.go:117] "RemoveContainer" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.049563 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.055679 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"] Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.077297 4775 scope.go:117] "RemoveContainer" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.077850 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5\": container with ID starting with d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5 not found: ID does not exist" containerID="d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.077889 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5"} err="failed to get container status \"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5\": rpc error: code = NotFound desc = could not find container \"d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5\": container with ID starting with d399db7d10a3f96ad48455263cc1a2f5c347077b872b70479e4c6c0cf205a7d5 not found: ID does not exist" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.123150 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkqlt\" (UniqueName: \"kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt\") pod \"8dc76b90-669a-4df4-a976-1199443a8f55\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.123214 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data\") pod \"8dc76b90-669a-4df4-a976-1199443a8f55\" (UID: \"8dc76b90-669a-4df4-a976-1199443a8f55\") " Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.128463 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt" (OuterVolumeSpecName: "kube-api-access-lkqlt") pod "8dc76b90-669a-4df4-a976-1199443a8f55" (UID: "8dc76b90-669a-4df4-a976-1199443a8f55"). InnerVolumeSpecName "kube-api-access-lkqlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.145104 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data" (OuterVolumeSpecName: "config-data") pod "8dc76b90-669a-4df4-a976-1199443a8f55" (UID: "8dc76b90-669a-4df4-a976-1199443a8f55"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.225479 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkqlt\" (UniqueName: \"kubernetes.io/projected/8dc76b90-669a-4df4-a976-1199443a8f55-kube-api-access-lkqlt\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.225543 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8dc76b90-669a-4df4-a976-1199443a8f55-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.379426 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.385029 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.399005 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.530145 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data\") pod \"053e93b4-4f28-478d-9065-20980afe9e20\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.530300 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvpt6\" (UniqueName: \"kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6\") pod \"053e93b4-4f28-478d-9065-20980afe9e20\" (UID: \"053e93b4-4f28-478d-9065-20980afe9e20\") " Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.537016 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6" (OuterVolumeSpecName: "kube-api-access-vvpt6") pod "053e93b4-4f28-478d-9065-20980afe9e20" (UID: "053e93b4-4f28-478d-9065-20980afe9e20"). InnerVolumeSpecName "kube-api-access-vvpt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.574557 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data" (OuterVolumeSpecName: "config-data") pod "053e93b4-4f28-478d-9065-20980afe9e20" (UID: "053e93b4-4f28-478d-9065-20980afe9e20"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.631749 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/053e93b4-4f28-478d-9065-20980afe9e20-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:40 crc kubenswrapper[4775]: I0123 14:35:40.631786 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvpt6\" (UniqueName: \"kubernetes.io/projected/053e93b4-4f28-478d-9065-20980afe9e20-kube-api-access-vvpt6\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.632322 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.632406 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. No retries permitted until 2026-01-23 14:35:42.632385075 +0000 UTC m=+1889.627213815 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.889212 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.891534 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.893291 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:40 crc kubenswrapper[4775]: E0123 14:35:40.893510 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.030312 4775 generic.go:334] "Generic (PLEG): container finished" podID="bc1717a4-664a-4a44-9206-0b5c472cbd50" containerID="dfda1a9e78a513115b2113a2fcaec48ff69d5be5bceff17b19195b09fc695118" exitCode=0 Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.030522 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" event={"ID":"bc1717a4-664a-4a44-9206-0b5c472cbd50","Type":"ContainerDied","Data":"dfda1a9e78a513115b2113a2fcaec48ff69d5be5bceff17b19195b09fc695118"} Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.035701 4775 generic.go:334] "Generic (PLEG): container finished" podID="053e93b4-4f28-478d-9065-20980afe9e20" containerID="ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe" exitCode=0 Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.035783 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"053e93b4-4f28-478d-9065-20980afe9e20","Type":"ContainerDied","Data":"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe"} Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.035851 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"053e93b4-4f28-478d-9065-20980afe9e20","Type":"ContainerDied","Data":"c6208b8557503ef028aa8573339ec1a013f7ed363a4379dec4e0efaa541f0f37"} Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.035877 4775 scope.go:117] "RemoveContainer" containerID="ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.036234 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.097523 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.098045 4775 scope.go:117] "RemoveContainer" containerID="ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe" Jan 23 14:35:41 crc kubenswrapper[4775]: E0123 14:35:41.098488 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe\": container with ID starting with ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe not found: ID does not exist" containerID="ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.098535 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe"} err="failed to get container status \"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe\": rpc error: code = NotFound desc = could not find container \"ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe\": container with ID starting with ab22bfbf9613e4952570f4b58b9dfa2a5876ac2a81bea5c917b73f18bda88cfe not found: ID does not exist" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.102746 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.458939 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.464622 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.648500 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw59g\" (UniqueName: \"kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g\") pod \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.648638 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l5p5\" (UniqueName: \"kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5\") pod \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.648720 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts\") pod \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\" (UID: \"216ac3cd-4e4b-40b9-b05d-be15cfe121ed\") " Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.648990 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts\") pod \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\" (UID: \"7d8e2db8-c4c4-48ea-83a1-d750eb6de857\") " Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.649794 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "216ac3cd-4e4b-40b9-b05d-be15cfe121ed" (UID: "216ac3cd-4e4b-40b9-b05d-be15cfe121ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.650960 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d8e2db8-c4c4-48ea-83a1-d750eb6de857" (UID: "7d8e2db8-c4c4-48ea-83a1-d750eb6de857"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.653992 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5" (OuterVolumeSpecName: "kube-api-access-2l5p5") pod "7d8e2db8-c4c4-48ea-83a1-d750eb6de857" (UID: "7d8e2db8-c4c4-48ea-83a1-d750eb6de857"). InnerVolumeSpecName "kube-api-access-2l5p5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.654727 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g" (OuterVolumeSpecName: "kube-api-access-kw59g") pod "216ac3cd-4e4b-40b9-b05d-be15cfe121ed" (UID: "216ac3cd-4e4b-40b9-b05d-be15cfe121ed"). InnerVolumeSpecName "kube-api-access-kw59g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.725669 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="053e93b4-4f28-478d-9065-20980afe9e20" path="/var/lib/kubelet/pods/053e93b4-4f28-478d-9065-20980afe9e20/volumes" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.726470 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" path="/var/lib/kubelet/pods/51e63565-a2ef-4d12-af2f-f3dc6c2942d9/volumes" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.727130 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" path="/var/lib/kubelet/pods/8dc76b90-669a-4df4-a976-1199443a8f55/volumes" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.751380 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw59g\" (UniqueName: \"kubernetes.io/projected/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-kube-api-access-kw59g\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.751423 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l5p5\" (UniqueName: \"kubernetes.io/projected/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-kube-api-access-2l5p5\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.751477 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/216ac3cd-4e4b-40b9-b05d-be15cfe121ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.751495 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d8e2db8-c4c4-48ea-83a1-d750eb6de857-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.786029 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": read tcp 10.217.0.2:37162->10.217.0.202:8774: read: connection reset by peer" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.786100 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": read tcp 10.217.0.2:37146->10.217.0.202:8774: read: connection reset by peer" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.812572 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.198:8775/\": read tcp 10.217.0.2:39896->10.217.0.198:8775: read: connection reset by peer" Jan 23 14:35:41 crc kubenswrapper[4775]: I0123 14:35:41.812632 4775 prober.go:107] "Probe failed" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.198:8775/\": read tcp 10.217.0.2:39900->10.217.0.198:8775: read: connection reset by peer" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.060204 4775 
generic.go:334] "Generic (PLEG): container finished" podID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerID="097b2364f83440a3132b6cb79cdb472334da74927128439c671d4d99b0398fa9" exitCode=0 Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.060429 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerDied","Data":"097b2364f83440a3132b6cb79cdb472334da74927128439c671d4d99b0398fa9"} Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.063757 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" event={"ID":"216ac3cd-4e4b-40b9-b05d-be15cfe121ed","Type":"ContainerDied","Data":"f8ac7a707989f4b704ed34110c6cbeac748ed5bc390cc7fe8e9c1bd9c862dadb"} Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.063792 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8ac7a707989f4b704ed34110c6cbeac748ed5bc390cc7fe8e9c1bd9c862dadb" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.063919 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novaapia3ac-account-delete-gzdbr" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.073446 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" event={"ID":"7d8e2db8-c4c4-48ea-83a1-d750eb6de857","Type":"ContainerDied","Data":"2c60df97c5de83b16f76501e503728db1f38861868acc41b3bbf53e358943ce1"} Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.073495 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c60df97c5de83b16f76501e503728db1f38861868acc41b3bbf53e358943ce1" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.073577 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/novacell11814-account-delete-f4rp4" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.082237 4775 generic.go:334] "Generic (PLEG): container finished" podID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerID="5a0c9d73c99e74b57defba56af031189ee12f4eb97f9a8df2f62a83574ffa9a2" exitCode=0 Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.082375 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerDied","Data":"5a0c9d73c99e74b57defba56af031189ee12f4eb97f9a8df2f62a83574ffa9a2"} Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.251295 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.262569 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.266538 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs\") pod \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272239 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data\") pod \"93ee5e49-16f0-402a-9d8e-6f237110e663\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272277 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data\") pod \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272302 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js8lh\" (UniqueName: \"kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh\") pod \"93ee5e49-16f0-402a-9d8e-6f237110e663\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272323 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs\") pod \"93ee5e49-16f0-402a-9d8e-6f237110e663\" (UID: \"93ee5e49-16f0-402a-9d8e-6f237110e663\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272350 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhnl9\" (UniqueName: \"kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9\") pod \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\" (UID: \"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.268566 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs" (OuterVolumeSpecName: "logs") pod "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" (UID: "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.272965 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.273227 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs" (OuterVolumeSpecName: "logs") pod "93ee5e49-16f0-402a-9d8e-6f237110e663" (UID: "93ee5e49-16f0-402a-9d8e-6f237110e663"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.278065 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh" (OuterVolumeSpecName: "kube-api-access-js8lh") pod "93ee5e49-16f0-402a-9d8e-6f237110e663" (UID: "93ee5e49-16f0-402a-9d8e-6f237110e663"). InnerVolumeSpecName "kube-api-access-js8lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.282711 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9" (OuterVolumeSpecName: "kube-api-access-lhnl9") pod "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" (UID: "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4"). InnerVolumeSpecName "kube-api-access-lhnl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.305509 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data" (OuterVolumeSpecName: "config-data") pod "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" (UID: "8a2eb109-bc5d-4ce5-af46-d5596b98b4e4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.306265 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data" (OuterVolumeSpecName: "config-data") pod "93ee5e49-16f0-402a-9d8e-6f237110e663" (UID: "93ee5e49-16f0-402a-9d8e-6f237110e663"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.368970 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.375297 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93ee5e49-16f0-402a-9d8e-6f237110e663-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.375362 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.375381 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js8lh\" (UniqueName: \"kubernetes.io/projected/93ee5e49-16f0-402a-9d8e-6f237110e663-kube-api-access-js8lh\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.375394 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93ee5e49-16f0-402a-9d8e-6f237110e663-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.375408 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhnl9\" (UniqueName: \"kubernetes.io/projected/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4-kube-api-access-lhnl9\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.476152 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts\") pod \"bc1717a4-664a-4a44-9206-0b5c472cbd50\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.476245 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6g9q\" (UniqueName: \"kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q\") pod \"bc1717a4-664a-4a44-9206-0b5c472cbd50\" (UID: \"bc1717a4-664a-4a44-9206-0b5c472cbd50\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.477088 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc1717a4-664a-4a44-9206-0b5c472cbd50" (UID: "bc1717a4-664a-4a44-9206-0b5c472cbd50"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.479718 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q" (OuterVolumeSpecName: "kube-api-access-d6g9q") pod "bc1717a4-664a-4a44-9206-0b5c472cbd50" (UID: "bc1717a4-664a-4a44-9206-0b5c472cbd50"). InnerVolumeSpecName "kube-api-access-d6g9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.570342 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.577551 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc1717a4-664a-4a44-9206-0b5c472cbd50-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.577580 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6g9q\" (UniqueName: \"kubernetes.io/projected/bc1717a4-664a-4a44-9206-0b5c472cbd50-kube-api-access-d6g9q\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.678897 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data\") pod \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.679057 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhrnh\" (UniqueName: \"kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh\") pod \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\" (UID: \"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d\") " Jan 23 14:35:42 crc kubenswrapper[4775]: E0123 14:35:42.679539 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:42 crc kubenswrapper[4775]: E0123 14:35:42.679624 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. No retries permitted until 2026-01-23 14:35:46.679602583 +0000 UTC m=+1893.674431323 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.692354 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh" (OuterVolumeSpecName: "kube-api-access-mhrnh") pod "f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" (UID: "f1b9dee7-4afa-4bdc-88fc-f610d0bca84d"). InnerVolumeSpecName "kube-api-access-mhrnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.711379 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data" (OuterVolumeSpecName: "config-data") pod "f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" (UID: "f1b9dee7-4afa-4bdc-88fc-f610d0bca84d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.780703 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhrnh\" (UniqueName: \"kubernetes.io/projected/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-kube-api-access-mhrnh\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:42 crc kubenswrapper[4775]: I0123 14:35:42.780743 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.101618 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"93ee5e49-16f0-402a-9d8e-6f237110e663","Type":"ContainerDied","Data":"fadc935d0ca1313694e64e348196ad9cf5ba16ec1ffcb2fcdd1d5a9b83025e52"} Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.102207 4775 scope.go:117] "RemoveContainer" containerID="5a0c9d73c99e74b57defba56af031189ee12f4eb97f9a8df2f62a83574ffa9a2" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.101635 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.104422 4775 generic.go:334] "Generic (PLEG): container finished" podID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" exitCode=0 Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.104506 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.104546 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d","Type":"ContainerDied","Data":"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e"} Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.104596 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"f1b9dee7-4afa-4bdc-88fc-f610d0bca84d","Type":"ContainerDied","Data":"02382e22f435f1e3a7c73d28641f54e87db1dd32276e640504ea0f19f830c722"} Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.107836 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"8a2eb109-bc5d-4ce5-af46-d5596b98b4e4","Type":"ContainerDied","Data":"3ef3e20b260f3e98c87c0a0151aead3f8b34244b446f78a1aa8e60eef7375188"} Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.107977 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.112793 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" event={"ID":"bc1717a4-664a-4a44-9206-0b5c472cbd50","Type":"ContainerDied","Data":"493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848"} Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.112869 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="493d8cbf75b60552f0b7eabb631b7ee805b747cfaa54a588d934ac6f11e2c848" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.112896 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/novacell04dcc-account-delete-bljlz" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.150826 4775 scope.go:117] "RemoveContainer" containerID="aa0a614b45a14d37314ee88b48d9cdfd5a2ac59674285aa0bcd8f730765f5458" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.178719 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.194126 4775 scope.go:117] "RemoveContainer" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.197788 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.208201 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.219444 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.220433 4775 scope.go:117] "RemoveContainer" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" Jan 23 14:35:43 crc kubenswrapper[4775]: E0123 14:35:43.221106 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e\": container with ID starting with 575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e not found: ID does not exist" containerID="575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.221178 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e"} err="failed to get container status \"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e\": rpc error: code = NotFound desc = could not find container \"575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e\": container with ID starting with 575370c07292e3d956d8a0e40335b6219090d6e10fbe3d288c76deb77fcfe67e not found: ID does not exist" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.221209 4775 scope.go:117] "RemoveContainer" containerID="097b2364f83440a3132b6cb79cdb472334da74927128439c671d4d99b0398fa9" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.227783 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.234789 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.245233 4775 scope.go:117] "RemoveContainer" containerID="75614d1831bbac5592105e5265508722336cc15ee6a181f2f54c134aec1aa13b" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.417573 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-p9ljs"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.422879 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-p9ljs"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.449222 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.456473 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-1814-account-create-update-nnb6t"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.462796 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell11814-account-delete-f4rp4"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.469551 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell11814-account-delete-f4rp4"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.531397 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-5h6rf"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.550390 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-5h6rf"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.563382 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novaapia3ac-account-delete-gzdbr"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.572823 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.578379 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novaapia3ac-account-delete-gzdbr"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.584122 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-a3ac-account-create-update-phbcc"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.611957 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nr9cr"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.620307 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-nr9cr"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.637589 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/novacell04dcc-account-delete-bljlz"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.643082 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.647231 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/novacell04dcc-account-delete-bljlz"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.651535 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-4dcc-account-create-update-7fftw"] Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.728271 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112204a1-12d6-49b5-b97e-de4daab49dcf" path="/var/lib/kubelet/pods/112204a1-12d6-49b5-b97e-de4daab49dcf/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.729007 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="216ac3cd-4e4b-40b9-b05d-be15cfe121ed" path="/var/lib/kubelet/pods/216ac3cd-4e4b-40b9-b05d-be15cfe121ed/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.729668 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c" path="/var/lib/kubelet/pods/42cc7ba0-a8a3-4c4f-8bcd-96dd19bd317c/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 
14:35:43.730339 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d8e2db8-c4c4-48ea-83a1-d750eb6de857" path="/var/lib/kubelet/pods/7d8e2db8-c4c4-48ea-83a1-d750eb6de857/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.731627 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ff6b200-7364-4e13-956d-628abd48cbaa" path="/var/lib/kubelet/pods/7ff6b200-7364-4e13-956d-628abd48cbaa/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.732381 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" path="/var/lib/kubelet/pods/8a2eb109-bc5d-4ce5-af46-d5596b98b4e4/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.733153 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" path="/var/lib/kubelet/pods/93ee5e49-16f0-402a-9d8e-6f237110e663/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.734508 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b95fa161-1171-4dc2-b0be-3aa279cb717d" path="/var/lib/kubelet/pods/b95fa161-1171-4dc2-b0be-3aa279cb717d/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.735478 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc1717a4-664a-4a44-9206-0b5c472cbd50" path="/var/lib/kubelet/pods/bc1717a4-664a-4a44-9206-0b5c472cbd50/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.736207 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4b0dbf6-948b-45c4-b5a0-6027f816c873" path="/var/lib/kubelet/pods/c4b0dbf6-948b-45c4-b5a0-6027f816c873/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.737400 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e86f57ad-0eba-4794-8f64-f70609e535e8" path="/var/lib/kubelet/pods/e86f57ad-0eba-4794-8f64-f70609e535e8/volumes" Jan 23 14:35:43 crc kubenswrapper[4775]: I0123 14:35:43.738049 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" path="/var/lib/kubelet/pods/f1b9dee7-4afa-4bdc-88fc-f610d0bca84d/volumes" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.888241 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.892038 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.896730 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.896858 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.940933 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-db-create-bfq79"] Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941264 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941285 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941305 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="216ac3cd-4e4b-40b9-b05d-be15cfe121ed" containerName="mariadb-account-delete" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941316 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="216ac3cd-4e4b-40b9-b05d-be15cfe121ed" containerName="mariadb-account-delete" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941333 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc1717a4-664a-4a44-9206-0b5c472cbd50" containerName="mariadb-account-delete" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941340 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc1717a4-664a-4a44-9206-0b5c472cbd50" containerName="mariadb-account-delete" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941358 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941367 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941382 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941390 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941404 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d8e2db8-c4c4-48ea-83a1-d750eb6de857" containerName="mariadb-account-delete" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941412 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d8e2db8-c4c4-48ea-83a1-d750eb6de857" containerName="mariadb-account-delete" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941425 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941433 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941443 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="053e93b4-4f28-478d-9065-20980afe9e20" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:35:45 crc 
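
Note: the repeated "ExecSync cmd from runtime service failed" entries above are the exec readiness probe /usr/bin/pgrep -r DRST nova-compute failing because its target container is already stopping, so CRI-O refuses to register a new exec PID. Run directly on a host, the same check looks like this (a plain os/exec stand-in, not the kubelet code path):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // pgrep exits 0 when a nova-compute process in state D/R/S/T matches,
        // and 1 when nothing matches; any non-zero exit means "not ready".
        cmd := exec.Command("/usr/bin/pgrep", "-r", "DRST", "nova-compute")
        out, err := cmd.CombinedOutput()
        if err != nil {
            fmt.Printf("probe failed: %v (output: %q)\n", err, out)
            return
        }
        fmt.Printf("probe ok, matching pids:\n%s", out)
    }
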
kubenswrapper[4775]: I0123 14:35:45.941450 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="053e93b4-4f28-478d-9065-20980afe9e20" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941460 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941469 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941486 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941494 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:35:45 crc kubenswrapper[4775]: E0123 14:35:45.941513 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941521 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941698 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b9dee7-4afa-4bdc-88fc-f610d0bca84d" containerName="nova-kuttl-cell0-conductor-conductor" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941713 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d8e2db8-c4c4-48ea-83a1-d750eb6de857" containerName="mariadb-account-delete" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941733 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="216ac3cd-4e4b-40b9-b05d-be15cfe121ed" containerName="mariadb-account-delete" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941750 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc1717a4-664a-4a44-9206-0b5c472cbd50" containerName="mariadb-account-delete" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941769 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="51e63565-a2ef-4d12-af2f-f3dc6c2942d9" containerName="nova-kuttl-cell1-novncproxy-novncproxy" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941783 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc76b90-669a-4df4-a976-1199443a8f55" containerName="nova-kuttl-cell1-conductor-conductor" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941818 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-api" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941833 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="053e93b4-4f28-478d-9065-20980afe9e20" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941845 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-log" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941859 4775 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="93ee5e49-16f0-402a-9d8e-6f237110e663" containerName="nova-kuttl-api-log" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.941870 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2eb109-bc5d-4ce5-af46-d5596b98b4e4" containerName="nova-kuttl-metadata-metadata" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.942682 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:45 crc kubenswrapper[4775]: I0123 14:35:45.957596 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-bfq79"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.034628 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-d8kgs"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.035595 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.037270 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqsln\" (UniqueName: \"kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.037309 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.037364 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.037404 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cstc5\" (UniqueName: \"kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.049161 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-d8kgs"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.122400 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.123430 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.126086 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-api-db-secret" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.130543 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139065 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cstc5\" (UniqueName: \"kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139138 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqsln\" (UniqueName: \"kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139166 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xmx4\" (UniqueName: \"kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139190 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139231 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139250 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.139863 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.140412 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.158279 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqsln\" (UniqueName: \"kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln\") pod \"nova-api-db-create-bfq79\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.158635 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cstc5\" (UniqueName: \"kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5\") pod \"nova-cell0-db-create-d8kgs\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.219516 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-82jzj"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.220580 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.232250 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-82jzj"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.240564 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xmx4\" (UniqueName: \"kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.240736 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.241709 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.257562 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xmx4\" (UniqueName: \"kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4\") pod \"nova-api-31e4-account-create-update-2rd2s\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.299039 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.342585 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtpfl\" (UniqueName: \"kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.342639 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.343380 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.344330 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.348999 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell0-db-secret" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.350189 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.355182 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.439375 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.444598 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.444632 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm97n\" (UniqueName: \"kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.444681 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtpfl\" (UniqueName: \"kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.444716 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.451286 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.473915 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtpfl\" (UniqueName: \"kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl\") pod \"nova-cell1-db-create-82jzj\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.522064 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.522992 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.524699 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-cell1-db-secret" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.536433 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.546844 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.547782 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.548027 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm97n\" (UniqueName: \"kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.549703 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.570400 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm97n\" (UniqueName: \"kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n\") pod \"nova-cell0-f1e1-account-create-update-8ng7h\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.649943 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.650078 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nx97\" (UniqueName: \"kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.678748 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.752814 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nx97\" (UniqueName: \"kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: E0123 14:35:46.753637 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:46 crc kubenswrapper[4775]: E0123 14:35:46.753697 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. No retries permitted until 2026-01-23 14:35:54.753679991 +0000 UTC m=+1901.748508751 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.754125 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.755488 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.776762 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nx97\" (UniqueName: \"kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97\") pod \"nova-cell1-574a-account-create-update-mjhg8\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.785532 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-db-create-bfq79"] Jan 23 14:35:46 crc kubenswrapper[4775]: W0123 14:35:46.797087 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98b564d3_5399_47b6_9397_4c3b006f9e13.slice/crio-4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41 WatchSource:0}: Error finding container 4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41: Status 404 returned error can't find the container with id 4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41 Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.851702 4775 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.892423 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-d8kgs"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.907887 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h"] Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.952355 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s"] Jan 23 14:35:46 crc kubenswrapper[4775]: W0123 14:35:46.977623 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95df8848_8035_4302_9689_db060f7d4148.slice/crio-55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c WatchSource:0}: Error finding container 55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c: Status 404 returned error can't find the container with id 55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c Jan 23 14:35:46 crc kubenswrapper[4775]: I0123 14:35:46.997324 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-82jzj"] Jan 23 14:35:47 crc kubenswrapper[4775]: W0123 14:35:47.028298 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod891c1a15_7b44_4c8f_be11_d06333a1d0d1.slice/crio-12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861 WatchSource:0}: Error finding container 12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861: Status 404 returned error can't find the container with id 12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861 Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.143345 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8"] Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.185126 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" event={"ID":"48eb2aff-1769-415f-b284-8d0cbf32a4e9","Type":"ContainerStarted","Data":"594fa043ec888b92b711a5fa6f9217304672bf0df3d16f28c04888ef7084f11f"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.189077 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-bfq79" event={"ID":"98b564d3-5399-47b6-9397-4c3b006f9e13","Type":"ContainerStarted","Data":"fad204a9922c6b587aa30b8277005173345d455f94c99d5d275be428107c4c7c"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.189100 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-bfq79" event={"ID":"98b564d3-5399-47b6-9397-4c3b006f9e13","Type":"ContainerStarted","Data":"4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.192776 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" event={"ID":"891c1a15-7b44-4c8f-be11-d06333a1d0d1","Type":"ContainerStarted","Data":"12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.193716 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" event={"ID":"15c2fb30-3be5-4e47-b2d3-8fbd54665494","Type":"ContainerStarted","Data":"4647aae651352dc525c6f8ea2dcb7dad8d5914c55c29da6b223800393e5bbbb9"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.194596 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" event={"ID":"95df8848-8035-4302-9689-db060f7d4148","Type":"ContainerStarted","Data":"55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.197414 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" event={"ID":"603674a6-1055-4e27-b370-2b57865ebc55","Type":"ContainerStarted","Data":"0eff9d8eee28ce912e21c7c4f7871ae916bc9d5ed3ea4fca779e82c2788bb4b7"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.197439 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" event={"ID":"603674a6-1055-4e27-b370-2b57865ebc55","Type":"ContainerStarted","Data":"f71b171cbcec937d3096b9d1b22617ac009f36ef7a23e82cc7cf28528f40caf7"} Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.209559 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-api-db-create-bfq79" podStartSLOduration=2.209544843 podStartE2EDuration="2.209544843s" podCreationTimestamp="2026-01-23 14:35:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:47.206241604 +0000 UTC m=+1894.201070344" watchObservedRunningTime="2026-01-23 14:35:47.209544843 +0000 UTC m=+1894.204373583" Jan 23 14:35:47 crc kubenswrapper[4775]: I0123 14:35:47.221957 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" podStartSLOduration=1.221939947 podStartE2EDuration="1.221939947s" podCreationTimestamp="2026-01-23 14:35:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:47.218880695 +0000 UTC m=+1894.213709425" watchObservedRunningTime="2026-01-23 14:35:47.221939947 +0000 UTC m=+1894.216768687" Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.210235 4775 generic.go:334] "Generic (PLEG): container finished" podID="891c1a15-7b44-4c8f-be11-d06333a1d0d1" containerID="3b2dfb102f46ee1631a2160c9d3d2f454d0244cb082c8318b072e1947bb67ce1" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.210335 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" event={"ID":"891c1a15-7b44-4c8f-be11-d06333a1d0d1","Type":"ContainerDied","Data":"3b2dfb102f46ee1631a2160c9d3d2f454d0244cb082c8318b072e1947bb67ce1"} Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.220121 4775 generic.go:334] "Generic (PLEG): container finished" podID="15c2fb30-3be5-4e47-b2d3-8fbd54665494" containerID="f75e094c5540e8cb925dd39cbb448ad5adf94fb3b2f88a9a2855acad38942424" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.220241 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" event={"ID":"15c2fb30-3be5-4e47-b2d3-8fbd54665494","Type":"ContainerDied","Data":"f75e094c5540e8cb925dd39cbb448ad5adf94fb3b2f88a9a2855acad38942424"} Jan 23 14:35:48 crc 
kubenswrapper[4775]: I0123 14:35:48.225082 4775 generic.go:334] "Generic (PLEG): container finished" podID="95df8848-8035-4302-9689-db060f7d4148" containerID="5022709a82d85e5efe22de467daeee972c2edbb45f0956772656b5f2da7c871d" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.225226 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" event={"ID":"95df8848-8035-4302-9689-db060f7d4148","Type":"ContainerDied","Data":"5022709a82d85e5efe22de467daeee972c2edbb45f0956772656b5f2da7c871d"} Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.238594 4775 generic.go:334] "Generic (PLEG): container finished" podID="603674a6-1055-4e27-b370-2b57865ebc55" containerID="0eff9d8eee28ce912e21c7c4f7871ae916bc9d5ed3ea4fca779e82c2788bb4b7" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.238732 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" event={"ID":"603674a6-1055-4e27-b370-2b57865ebc55","Type":"ContainerDied","Data":"0eff9d8eee28ce912e21c7c4f7871ae916bc9d5ed3ea4fca779e82c2788bb4b7"} Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.241627 4775 generic.go:334] "Generic (PLEG): container finished" podID="48eb2aff-1769-415f-b284-8d0cbf32a4e9" containerID="9181f36c62e9c5f12ea45cd0ada22e77d0a8f8e6dddcf6191c606aedb0bccd71" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.241734 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" event={"ID":"48eb2aff-1769-415f-b284-8d0cbf32a4e9","Type":"ContainerDied","Data":"9181f36c62e9c5f12ea45cd0ada22e77d0a8f8e6dddcf6191c606aedb0bccd71"} Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.249478 4775 generic.go:334] "Generic (PLEG): container finished" podID="98b564d3-5399-47b6-9397-4c3b006f9e13" containerID="fad204a9922c6b587aa30b8277005173345d455f94c99d5d275be428107c4c7c" exitCode=0 Jan 23 14:35:48 crc kubenswrapper[4775]: I0123 14:35:48.249559 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-bfq79" event={"ID":"98b564d3-5399-47b6-9397-4c3b006f9e13","Type":"ContainerDied","Data":"fad204a9922c6b587aa30b8277005173345d455f94c99d5d275be428107c4c7c"} Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.733302 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.817647 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.822216 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.835063 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.837750 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.844229 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.856479 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm97n\" (UniqueName: \"kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n\") pod \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.856525 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts\") pod \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\" (UID: \"48eb2aff-1769-415f-b284-8d0cbf32a4e9\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.858088 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48eb2aff-1769-415f-b284-8d0cbf32a4e9" (UID: "48eb2aff-1769-415f-b284-8d0cbf32a4e9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.859070 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48eb2aff-1769-415f-b284-8d0cbf32a4e9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.863314 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n" (OuterVolumeSpecName: "kube-api-access-pm97n") pod "48eb2aff-1769-415f-b284-8d0cbf32a4e9" (UID: "48eb2aff-1769-415f-b284-8d0cbf32a4e9"). InnerVolumeSpecName "kube-api-access-pm97n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960467 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts\") pod \"603674a6-1055-4e27-b370-2b57865ebc55\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960536 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts\") pod \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960570 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqsln\" (UniqueName: \"kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln\") pod \"98b564d3-5399-47b6-9397-4c3b006f9e13\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960610 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts\") pod \"95df8848-8035-4302-9689-db060f7d4148\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960630 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts\") pod \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960659 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts\") pod \"98b564d3-5399-47b6-9397-4c3b006f9e13\" (UID: \"98b564d3-5399-47b6-9397-4c3b006f9e13\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960678 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xmx4\" (UniqueName: \"kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4\") pod \"95df8848-8035-4302-9689-db060f7d4148\" (UID: \"95df8848-8035-4302-9689-db060f7d4148\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960744 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cstc5\" (UniqueName: \"kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5\") pod \"603674a6-1055-4e27-b370-2b57865ebc55\" (UID: \"603674a6-1055-4e27-b370-2b57865ebc55\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960771 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nx97\" (UniqueName: \"kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97\") pod \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\" (UID: \"15c2fb30-3be5-4e47-b2d3-8fbd54665494\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.960829 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtpfl\" (UniqueName: 
\"kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl\") pod \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\" (UID: \"891c1a15-7b44-4c8f-be11-d06333a1d0d1\") " Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.961059 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm97n\" (UniqueName: \"kubernetes.io/projected/48eb2aff-1769-415f-b284-8d0cbf32a4e9-kube-api-access-pm97n\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.961635 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "891c1a15-7b44-4c8f-be11-d06333a1d0d1" (UID: "891c1a15-7b44-4c8f-be11-d06333a1d0d1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.961775 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "15c2fb30-3be5-4e47-b2d3-8fbd54665494" (UID: "15c2fb30-3be5-4e47-b2d3-8fbd54665494"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.962034 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "98b564d3-5399-47b6-9397-4c3b006f9e13" (UID: "98b564d3-5399-47b6-9397-4c3b006f9e13"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.962184 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "95df8848-8035-4302-9689-db060f7d4148" (UID: "95df8848-8035-4302-9689-db060f7d4148"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.962215 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "603674a6-1055-4e27-b370-2b57865ebc55" (UID: "603674a6-1055-4e27-b370-2b57865ebc55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.964598 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln" (OuterVolumeSpecName: "kube-api-access-xqsln") pod "98b564d3-5399-47b6-9397-4c3b006f9e13" (UID: "98b564d3-5399-47b6-9397-4c3b006f9e13"). InnerVolumeSpecName "kube-api-access-xqsln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.964949 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl" (OuterVolumeSpecName: "kube-api-access-vtpfl") pod "891c1a15-7b44-4c8f-be11-d06333a1d0d1" (UID: "891c1a15-7b44-4c8f-be11-d06333a1d0d1"). 
InnerVolumeSpecName "kube-api-access-vtpfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.965698 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4" (OuterVolumeSpecName: "kube-api-access-8xmx4") pod "95df8848-8035-4302-9689-db060f7d4148" (UID: "95df8848-8035-4302-9689-db060f7d4148"). InnerVolumeSpecName "kube-api-access-8xmx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.965996 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5" (OuterVolumeSpecName: "kube-api-access-cstc5") pod "603674a6-1055-4e27-b370-2b57865ebc55" (UID: "603674a6-1055-4e27-b370-2b57865ebc55"). InnerVolumeSpecName "kube-api-access-cstc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:49 crc kubenswrapper[4775]: I0123 14:35:49.966493 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97" (OuterVolumeSpecName: "kube-api-access-9nx97") pod "15c2fb30-3be5-4e47-b2d3-8fbd54665494" (UID: "15c2fb30-3be5-4e47-b2d3-8fbd54665494"). InnerVolumeSpecName "kube-api-access-9nx97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063216 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/15c2fb30-3be5-4e47-b2d3-8fbd54665494-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063266 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqsln\" (UniqueName: \"kubernetes.io/projected/98b564d3-5399-47b6-9397-4c3b006f9e13-kube-api-access-xqsln\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063287 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/95df8848-8035-4302-9689-db060f7d4148-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063305 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/891c1a15-7b44-4c8f-be11-d06333a1d0d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063322 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/98b564d3-5399-47b6-9397-4c3b006f9e13-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063339 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xmx4\" (UniqueName: \"kubernetes.io/projected/95df8848-8035-4302-9689-db060f7d4148-kube-api-access-8xmx4\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063356 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cstc5\" (UniqueName: \"kubernetes.io/projected/603674a6-1055-4e27-b370-2b57865ebc55-kube-api-access-cstc5\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063373 4775 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-9nx97\" (UniqueName: \"kubernetes.io/projected/15c2fb30-3be5-4e47-b2d3-8fbd54665494-kube-api-access-9nx97\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063390 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtpfl\" (UniqueName: \"kubernetes.io/projected/891c1a15-7b44-4c8f-be11-d06333a1d0d1-kube-api-access-vtpfl\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.063408 4775 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/603674a6-1055-4e27-b370-2b57865ebc55-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.274283 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" event={"ID":"48eb2aff-1769-415f-b284-8d0cbf32a4e9","Type":"ContainerDied","Data":"594fa043ec888b92b711a5fa6f9217304672bf0df3d16f28c04888ef7084f11f"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.274703 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="594fa043ec888b92b711a5fa6f9217304672bf0df3d16f28c04888ef7084f11f" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.275013 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.280353 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.280342 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-db-create-82jzj" event={"ID":"891c1a15-7b44-4c8f-be11-d06333a1d0d1","Type":"ContainerDied","Data":"12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.280530 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12402f3464157138b62ec66999d41c0d51c674b6b42d2bbc30a30fe7c4b3e861" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.283106 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-api-db-create-bfq79" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.283577 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-db-create-bfq79" event={"ID":"98b564d3-5399-47b6-9397-4c3b006f9e13","Type":"ContainerDied","Data":"4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.283648 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4559880a56fe388be0ecc62012eded903f09d5c3cf72691ce0db21d15a2a9b41" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.287687 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" event={"ID":"15c2fb30-3be5-4e47-b2d3-8fbd54665494","Type":"ContainerDied","Data":"4647aae651352dc525c6f8ea2dcb7dad8d5914c55c29da6b223800393e5bbbb9"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.287771 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4647aae651352dc525c6f8ea2dcb7dad8d5914c55c29da6b223800393e5bbbb9" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.285778 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.288661 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" event={"ID":"95df8848-8035-4302-9689-db060f7d4148","Type":"ContainerDied","Data":"55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.288767 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55fc17579b5a2bd9e23664c5c048cd99af77520820d666c6264f076f2466cc2c" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.289163 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.291181 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" event={"ID":"603674a6-1055-4e27-b370-2b57865ebc55","Type":"ContainerDied","Data":"f71b171cbcec937d3096b9d1b22617ac009f36ef7a23e82cc7cf28528f40caf7"} Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.291230 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f71b171cbcec937d3096b9d1b22617ac009f36ef7a23e82cc7cf28528f40caf7" Jan 23 14:35:50 crc kubenswrapper[4775]: I0123 14:35:50.291257 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-cell0-db-create-d8kgs" Jan 23 14:35:50 crc kubenswrapper[4775]: E0123 14:35:50.888320 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:50 crc kubenswrapper[4775]: E0123 14:35:50.890790 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:50 crc kubenswrapper[4775]: E0123 14:35:50.897152 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"] Jan 23 14:35:50 crc kubenswrapper[4775]: E0123 14:35:50.897261 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592096 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"] Jan 23 14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592456 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="603674a6-1055-4e27-b370-2b57865ebc55" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592482 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="603674a6-1055-4e27-b370-2b57865ebc55" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592501 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95df8848-8035-4302-9689-db060f7d4148" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592510 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="95df8848-8035-4302-9689-db060f7d4148" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592540 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="891c1a15-7b44-4c8f-be11-d06333a1d0d1" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592551 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="891c1a15-7b44-4c8f-be11-d06333a1d0d1" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592569 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48eb2aff-1769-415f-b284-8d0cbf32a4e9" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592579 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="48eb2aff-1769-415f-b284-8d0cbf32a4e9" containerName="mariadb-account-create-update" Jan 23 
14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592598 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b564d3-5399-47b6-9397-4c3b006f9e13" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592607 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b564d3-5399-47b6-9397-4c3b006f9e13" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: E0123 14:35:51.592627 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c2fb30-3be5-4e47-b2d3-8fbd54665494" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592638 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c2fb30-3be5-4e47-b2d3-8fbd54665494" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592947 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b564d3-5399-47b6-9397-4c3b006f9e13" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592968 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="48eb2aff-1769-415f-b284-8d0cbf32a4e9" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.592987 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="95df8848-8035-4302-9689-db060f7d4148" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.593004 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c2fb30-3be5-4e47-b2d3-8fbd54665494" containerName="mariadb-account-create-update" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.593019 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="603674a6-1055-4e27-b370-2b57865ebc55" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.593038 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="891c1a15-7b44-4c8f-be11-d06333a1d0d1" containerName="mariadb-database-create" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.593644 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.597085 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-scripts" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.597172 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-8xglt" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.597475 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.631220 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"] Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.693663 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2swfp\" (UniqueName: \"kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.693716 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.693839 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.795181 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.795289 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2swfp\" (UniqueName: \"kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.795374 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.800453 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.801730 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.816933 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2swfp\" (UniqueName: \"kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp\") pod \"nova-kuttl-cell0-conductor-db-sync-2l6n8\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:51 crc kubenswrapper[4775]: I0123 14:35:51.914361 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" Jan 23 14:35:52 crc kubenswrapper[4775]: I0123 14:35:52.413929 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"] Jan 23 14:35:53 crc kubenswrapper[4775]: I0123 14:35:53.325852 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" event={"ID":"12f70e17-ec31-43fc-ac56-d1742f962de5","Type":"ContainerStarted","Data":"60accca565e62d33f56b52cced99fb327dbdd19ac23aa7c351971c0a1d7d06f7"} Jan 23 14:35:53 crc kubenswrapper[4775]: I0123 14:35:53.326394 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" event={"ID":"12f70e17-ec31-43fc-ac56-d1742f962de5","Type":"ContainerStarted","Data":"ec0a8891924adf3fbb6081c0ef9843f0a923757c4c38b106643231b37bfab045"} Jan 23 14:35:53 crc kubenswrapper[4775]: I0123 14:35:53.352543 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" podStartSLOduration=2.352528503 podStartE2EDuration="2.352528503s" podCreationTimestamp="2026-01-23 14:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:35:53.348455424 +0000 UTC m=+1900.343284194" watchObservedRunningTime="2026-01-23 14:35:53.352528503 +0000 UTC m=+1900.347357233" Jan 23 14:35:54 crc kubenswrapper[4775]: E0123 14:35:54.775395 4775 secret.go:188] Couldn't get secret nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-config-data: secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found Jan 23 14:35:54 crc kubenswrapper[4775]: E0123 14:35:54.775920 4775 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data podName:bde4903d-4224-4139-a444-3c5baf78ff7b nodeName:}" failed. No retries permitted until 2026-01-23 14:36:10.775884953 +0000 UTC m=+1917.770713723 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data") pod "nova-kuttl-cell1-compute-fake1-compute-0" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b") : secret "nova-kuttl-cell1-compute-fake1-compute-config-data" not found
Jan 23 14:35:55 crc kubenswrapper[4775]: E0123 14:35:55.887929 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:35:55 crc kubenswrapper[4775]: E0123 14:35:55.891402 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:35:55 crc kubenswrapper[4775]: E0123 14:35:55.893498 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:35:55 crc kubenswrapper[4775]: E0123 14:35:55.893572 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:35:57 crc kubenswrapper[4775]: I0123 14:35:57.372969 4775 generic.go:334] "Generic (PLEG): container finished" podID="12f70e17-ec31-43fc-ac56-d1742f962de5" containerID="60accca565e62d33f56b52cced99fb327dbdd19ac23aa7c351971c0a1d7d06f7" exitCode=0
Jan 23 14:35:57 crc kubenswrapper[4775]: I0123 14:35:57.373060 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" event={"ID":"12f70e17-ec31-43fc-ac56-d1742f962de5","Type":"ContainerDied","Data":"60accca565e62d33f56b52cced99fb327dbdd19ac23aa7c351971c0a1d7d06f7"}
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.837787 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.848115 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts\") pod \"12f70e17-ec31-43fc-ac56-d1742f962de5\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") "
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.848209 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data\") pod \"12f70e17-ec31-43fc-ac56-d1742f962de5\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") "
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.848270 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2swfp\" (UniqueName: \"kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp\") pod \"12f70e17-ec31-43fc-ac56-d1742f962de5\" (UID: \"12f70e17-ec31-43fc-ac56-d1742f962de5\") "
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.857841 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp" (OuterVolumeSpecName: "kube-api-access-2swfp") pod "12f70e17-ec31-43fc-ac56-d1742f962de5" (UID: "12f70e17-ec31-43fc-ac56-d1742f962de5"). InnerVolumeSpecName "kube-api-access-2swfp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.858056 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts" (OuterVolumeSpecName: "scripts") pod "12f70e17-ec31-43fc-ac56-d1742f962de5" (UID: "12f70e17-ec31-43fc-ac56-d1742f962de5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.894411 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data" (OuterVolumeSpecName: "config-data") pod "12f70e17-ec31-43fc-ac56-d1742f962de5" (UID: "12f70e17-ec31-43fc-ac56-d1742f962de5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.954186 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.954215 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f70e17-ec31-43fc-ac56-d1742f962de5-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:58 crc kubenswrapper[4775]: I0123 14:35:58.954225 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2swfp\" (UniqueName: \"kubernetes.io/projected/12f70e17-ec31-43fc-ac56-d1742f962de5-kube-api-access-2swfp\") on node \"crc\" DevicePath \"\""
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.398414 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8" event={"ID":"12f70e17-ec31-43fc-ac56-d1742f962de5","Type":"ContainerDied","Data":"ec0a8891924adf3fbb6081c0ef9843f0a923757c4c38b106643231b37bfab045"}
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.398485 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec0a8891924adf3fbb6081c0ef9843f0a923757c4c38b106643231b37bfab045"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.398568 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.483965 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 23 14:35:59 crc kubenswrapper[4775]: E0123 14:35:59.484481 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f70e17-ec31-43fc-ac56-d1742f962de5" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.484513 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f70e17-ec31-43fc-ac56-d1742f962de5" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.484789 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f70e17-ec31-43fc-ac56-d1742f962de5" containerName="nova-kuttl-cell0-conductor-db-sync"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.485467 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.492182 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-conductor-config-data"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.503503 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-nova-kuttl-dockercfg-8xglt"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.510884 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.568687 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab3b1c6-093c-4891-957c-fad86eb8fd31-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.568791 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjzxs\" (UniqueName: \"kubernetes.io/projected/fab3b1c6-093c-4891-957c-fad86eb8fd31-kube-api-access-zjzxs\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.669961 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjzxs\" (UniqueName: \"kubernetes.io/projected/fab3b1c6-093c-4891-957c-fad86eb8fd31-kube-api-access-zjzxs\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.670476 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab3b1c6-093c-4891-957c-fad86eb8fd31-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.674230 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab3b1c6-093c-4891-957c-fad86eb8fd31-config-data\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.689951 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjzxs\" (UniqueName: \"kubernetes.io/projected/fab3b1c6-093c-4891-957c-fad86eb8fd31-kube-api-access-zjzxs\") pod \"nova-kuttl-cell0-conductor-0\" (UID: \"fab3b1c6-093c-4891-957c-fad86eb8fd31\") " pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:35:59 crc kubenswrapper[4775]: I0123 14:35:59.810185 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:36:00 crc kubenswrapper[4775]: I0123 14:36:00.305510 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-0"]
Jan 23 14:36:00 crc kubenswrapper[4775]: W0123 14:36:00.313048 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfab3b1c6_093c_4891_957c_fad86eb8fd31.slice/crio-a4af92dc82020f848fc10739302c2652938db547b6bcb453819e4595c3f34e60 WatchSource:0}: Error finding container a4af92dc82020f848fc10739302c2652938db547b6bcb453819e4595c3f34e60: Status 404 returned error can't find the container with id a4af92dc82020f848fc10739302c2652938db547b6bcb453819e4595c3f34e60
Jan 23 14:36:00 crc kubenswrapper[4775]: I0123 14:36:00.414279 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"fab3b1c6-093c-4891-957c-fad86eb8fd31","Type":"ContainerStarted","Data":"a4af92dc82020f848fc10739302c2652938db547b6bcb453819e4595c3f34e60"}
Jan 23 14:36:00 crc kubenswrapper[4775]: E0123 14:36:00.889395 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:00 crc kubenswrapper[4775]: E0123 14:36:00.891398 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:00 crc kubenswrapper[4775]: E0123 14:36:00.896714 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:00 crc kubenswrapper[4775]: E0123 14:36:00.896761 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:01 crc kubenswrapper[4775]: I0123 14:36:01.424826 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" event={"ID":"fab3b1c6-093c-4891-957c-fad86eb8fd31","Type":"ContainerStarted","Data":"6e7d6c4e51f6df27b5bf4d3033ffb1fe6002c520e973c427463e015965f2ce9d"}
Jan 23 14:36:01 crc kubenswrapper[4775]: I0123 14:36:01.425056 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:36:01 crc kubenswrapper[4775]: I0123 14:36:01.439917 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0" podStartSLOduration=2.439900502 podStartE2EDuration="2.439900502s" podCreationTimestamp="2026-01-23 14:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:01.439006228 +0000 UTC m=+1908.433834968" watchObservedRunningTime="2026-01-23 14:36:01.439900502 +0000 UTC m=+1908.434729242"
Jan 23 14:36:05 crc kubenswrapper[4775]: E0123 14:36:05.887652 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:05 crc kubenswrapper[4775]: E0123 14:36:05.889644 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:05 crc kubenswrapper[4775]: E0123 14:36:05.891437 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" cmd=["/usr/bin/pgrep","-r","DRST","nova-compute"]
Jan 23 14:36:05 crc kubenswrapper[4775]: E0123 14:36:05.891504 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:09 crc kubenswrapper[4775]: I0123 14:36:09.853776 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell0-conductor-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.382923 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.384531 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.387498 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.390476 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.397476 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.518919 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.518960 4775 generic.go:334] "Generic (PLEG): container finished" podID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76" exitCode=137
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.518994 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerDied","Data":"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"}
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.519021 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0" event={"ID":"bde4903d-4224-4139-a444-3c5baf78ff7b","Type":"ContainerDied","Data":"6eb0a59b18194a13bbf978de13cdca6d55273f8b0946c59e7a3ffc58619e5617"}
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.519041 4775 scope.go:117] "RemoveContainer" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.536912 4775 scope.go:117] "RemoveContainer" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537077 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: E0123 14:36:10.537523 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537549 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: E0123 14:36:10.537569 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537578 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: E0123 14:36:10.537599 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537610 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537865 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537909 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.537931 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" containerName="nova-kuttl-cell1-compute-fake1-compute-compute"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.538658 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.541116 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.556727 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.572053 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rxc4\" (UniqueName: \"kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.572102 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.572200 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.580439 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.581514 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.584512 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-novncproxy-config-data"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.606972 4775 scope.go:117] "RemoveContainer" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"
Jan 23 14:36:10 crc kubenswrapper[4775]: E0123 14:36:10.608677 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76\": container with ID starting with 0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76 not found: ID does not exist" containerID="0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.608710 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76"} err="failed to get container status \"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76\": rpc error: code = NotFound desc = could not find container \"0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76\": container with ID starting with 0e2bce06ce997801980d023e8a893d8147e6bf68be23888efe032c6315cccd76 not found: ID does not exist"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.608739 4775 scope.go:117] "RemoveContainer" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"
Jan 23 14:36:10 crc kubenswrapper[4775]: E0123 14:36:10.610711 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f\": container with ID starting with fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f not found: ID does not exist" containerID="fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.610774 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f"} err="failed to get container status \"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f\": rpc error: code = NotFound desc = could not find container \"fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f\": container with ID starting with fb284da39186f2ab9d4d50e0c08df4cb63745374c070a74a4239a3a6536ab15f not found: ID does not exist"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.658485 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673220 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data\") pod \"bde4903d-4224-4139-a444-3c5baf78ff7b\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") "
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673344 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgvjr\" (UniqueName: \"kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr\") pod \"bde4903d-4224-4139-a444-3c5baf78ff7b\" (UID: \"bde4903d-4224-4139-a444-3c5baf78ff7b\") "
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673610 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rxc4\" (UniqueName: \"kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673651 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673674 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdgzh\" (UniqueName: \"kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673697 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.673760 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.681906 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr" (OuterVolumeSpecName: "kube-api-access-qgvjr") pod "bde4903d-4224-4139-a444-3c5baf78ff7b" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b"). InnerVolumeSpecName "kube-api-access-qgvjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.688593 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.697711 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.702356 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rxc4\" (UniqueName: \"kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4\") pod \"nova-kuttl-cell0-cell-mapping-qxjlc\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") " pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.709844 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.711390 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.714145 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.728258 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.736058 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data" (OuterVolumeSpecName: "config-data") pod "bde4903d-4224-4139-a444-3c5baf78ff7b" (UID: "bde4903d-4224-4139-a444-3c5baf78ff7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.742348 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.756170 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.757510 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.759556 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.764300 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775082 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775169 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdgzh\" (UniqueName: \"kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775198 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb15b357-f464-4e43-a038-3b9e72455d49-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775233 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpmmj\" (UniqueName: \"kubernetes.io/projected/cb15b357-f464-4e43-a038-3b9e72455d49-kube-api-access-gpmmj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775288 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bde4903d-4224-4139-a444-3c5baf78ff7b-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.775300 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgvjr\" (UniqueName: \"kubernetes.io/projected/bde4903d-4224-4139-a444-3c5baf78ff7b-kube-api-access-qgvjr\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.778642 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.804192 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdgzh\" (UniqueName: \"kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh\") pod \"nova-kuttl-scheduler-0\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.881084 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.881153 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfbgh\" (UniqueName: \"kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.882905 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb15b357-f464-4e43-a038-3b9e72455d49-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.886895 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.886963 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4mwg\" (UniqueName: \"kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.886995 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpmmj\" (UniqueName: \"kubernetes.io/projected/cb15b357-f464-4e43-a038-3b9e72455d49-kube-api-access-gpmmj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.887052 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.887091 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.888992 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb15b357-f464-4e43-a038-3b9e72455d49-config-data\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.905242 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpmmj\" (UniqueName: \"kubernetes.io/projected/cb15b357-f464-4e43-a038-3b9e72455d49-kube-api-access-gpmmj\") pod \"nova-kuttl-cell1-novncproxy-0\" (UID: \"cb15b357-f464-4e43-a038-3b9e72455d49\") " pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.906516 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.950203 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989200 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989248 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfbgh\" (UniqueName: \"kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989274 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989300 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4mwg\" (UniqueName: \"kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989331 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.989359 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.991325 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.992010 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.994049 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:10 crc kubenswrapper[4775]: I0123 14:36:10.994576 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.005094 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4mwg\" (UniqueName: \"kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg\") pod \"nova-kuttl-api-0\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.005657 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfbgh\" (UniqueName: \"kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh\") pod \"nova-kuttl-metadata-0\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.032345 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.072287 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.187065 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"]
Jan 23 14:36:11 crc kubenswrapper[4775]: W0123 14:36:11.196350 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda194a858_8c18_41e1_9a10_428397753ece.slice/crio-540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94 WatchSource:0}: Error finding container 540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94: Status 404 returned error can't find the container with id 540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.271092 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.272250 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.281026 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.281246 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-scripts"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.281317 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.292161 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvhbc\" (UniqueName: \"kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.292223 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.292310 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.314772 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.393141 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.393218 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvhbc\" (UniqueName: \"kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.393265 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.398419 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.398932 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.410399 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvhbc\" (UniqueName: \"kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc\") pod \"nova-kuttl-cell1-conductor-db-sync-sjz5r\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.429905 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.531184 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.534438 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"cb15b357-f464-4e43-a038-3b9e72455d49","Type":"ContainerStarted","Data":"9cfe823b908c6c40ff233692171c40835d933d01141f24e68717f0715b55d84e"}
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.535560 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"
Jan 23 14:36:11 crc kubenswrapper[4775]: W0123 14:36:11.535584 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54fdd9aa_36fd_4817_9902_8e9e7c4d6f2b.slice/crio-c41817894212c3f14aaa33f7a5882a3305e520c3835b174c4049ab7f3bdb2ab3 WatchSource:0}: Error finding container c41817894212c3f14aaa33f7a5882a3305e520c3835b174c4049ab7f3bdb2ab3: Status 404 returned error can't find the container with id c41817894212c3f14aaa33f7a5882a3305e520c3835b174c4049ab7f3bdb2ab3
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.536687 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e2f66b57-925b-4d68-9917-77fded405cfd","Type":"ContainerStarted","Data":"2b8e3f238a5f79ab38d936a701163189da7e99d755c73a0f5f5797f13ecc3f18"}
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.536719 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e2f66b57-925b-4d68-9917-77fded405cfd","Type":"ContainerStarted","Data":"c2b7b80f0829ed1270ce5358668a91d6a85cf45703f80218b39e2c994e384bba"}
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.538479 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" event={"ID":"a194a858-8c18-41e1-9a10-428397753ece","Type":"ContainerStarted","Data":"8d06597f807e3e42864d38d837f7984e31d4d87d055c7ea7bb57e3bf624b9c80"}
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.538509 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" event={"ID":"a194a858-8c18-41e1-9a10-428397753ece","Type":"ContainerStarted","Data":"540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94"}
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.540100 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.562166 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" podStartSLOduration=1.562147885 podStartE2EDuration="1.562147885s" podCreationTimestamp="2026-01-23 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:11.551127938 +0000 UTC m=+1918.545956688" watchObservedRunningTime="2026-01-23 14:36:11.562147885 +0000 UTC m=+1918.556976625"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.574171 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.5741573180000001 podStartE2EDuration="1.574157318s" podCreationTimestamp="2026-01-23 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:11.566156093 +0000 UTC m=+1918.560984833" watchObservedRunningTime="2026-01-23 14:36:11.574157318 +0000 UTC m=+1918.568986058"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.592098 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.598683 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-compute-fake1-compute-0"]
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.653163 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:11 crc kubenswrapper[4775]: I0123 14:36:11.749593 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bde4903d-4224-4139-a444-3c5baf78ff7b" path="/var/lib/kubelet/pods/bde4903d-4224-4139-a444-3c5baf78ff7b/volumes"
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.066956 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"]
Jan 23 14:36:12 crc kubenswrapper[4775]: W0123 14:36:12.069728 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod263d2fcc_c533_4291_8e78_d8e9a2ee2894.slice/crio-8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261 WatchSource:0}: Error finding container 8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261: Status 404 returned error can't find the container with id 8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.575486 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerStarted","Data":"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.575781 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerStarted","Data":"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.575795 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerStarted","Data":"c41817894212c3f14aaa33f7a5882a3305e520c3835b174c4049ab7f3bdb2ab3"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.579333 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerStarted","Data":"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.579372 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerStarted","Data":"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.579395 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerStarted","Data":"673864771c4790e0938c1c15347461b29921695eaa3a3c61f8309e7d406615cd"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.586434 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" event={"ID":"263d2fcc-c533-4291-8e78-d8e9a2ee2894","Type":"ContainerStarted","Data":"10368cb00c51c9c09d42987a704f6c282da205a1023667df771174ceb21b2b54"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.586461 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" event={"ID":"263d2fcc-c533-4291-8e78-d8e9a2ee2894","Type":"ContainerStarted","Data":"8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.590357 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" event={"ID":"cb15b357-f464-4e43-a038-3b9e72455d49","Type":"ContainerStarted","Data":"85f6d20aed8bcbed2caea1d3221d7e598f335f036c48d01305911f6951677f7d"}
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.608624 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.6086030300000003 podStartE2EDuration="2.60860303s" podCreationTimestamp="2026-01-23 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:12.600379538 +0000 UTC m=+1919.595208288" watchObservedRunningTime="2026-01-23 14:36:12.60860303 +0000 UTC m=+1919.603431780"
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.635925 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0" podStartSLOduration=2.635894575 podStartE2EDuration="2.635894575s" podCreationTimestamp="2026-01-23 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:12.624546079 +0000 UTC m=+1919.619374849" watchObservedRunningTime="2026-01-23 14:36:12.635894575 +0000 UTC m=+1919.630723335"
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.649179 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.649151222 podStartE2EDuration="2.649151222s" podCreationTimestamp="2026-01-23 14:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:12.642699268 +0000 UTC m=+1919.637528248" watchObservedRunningTime="2026-01-23 14:36:12.649151222 +0000 UTC m=+1919.643979982"
Jan 23 14:36:12 crc kubenswrapper[4775]: I0123 14:36:12.667427 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" podStartSLOduration=1.667399644 podStartE2EDuration="1.667399644s" podCreationTimestamp="2026-01-23 14:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:12.664930097 +0000 UTC m=+1919.659758837" watchObservedRunningTime="2026-01-23 14:36:12.667399644 +0000 UTC m=+1919.662228394"
Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.625761 4775 generic.go:334] "Generic (PLEG): container finished" podID="263d2fcc-c533-4291-8e78-d8e9a2ee2894" containerID="10368cb00c51c9c09d42987a704f6c282da205a1023667df771174ceb21b2b54" exitCode=0
Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.626050 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" event={"ID":"263d2fcc-c533-4291-8e78-d8e9a2ee2894","Type":"ContainerDied","Data":"10368cb00c51c9c09d42987a704f6c282da205a1023667df771174ceb21b2b54"}
Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.636433 4775 generic.go:334] "Generic (PLEG): container finished" podID="a194a858-8c18-41e1-9a10-428397753ece" containerID="8d06597f807e3e42864d38d837f7984e31d4d87d055c7ea7bb57e3bf624b9c80" exitCode=0
Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.637116 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" event={"ID":"a194a858-8c18-41e1-9a10-428397753ece","Type":"ContainerDied","Data":"8d06597f807e3e42864d38d837f7984e31d4d87d055c7ea7bb57e3bf624b9c80"}
Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.907875 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:15 crc kubenswrapper[4775]: I0123 14:36:15.951408 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:16 crc kubenswrapper[4775]: I0123 14:36:16.033213 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:16 crc kubenswrapper[4775]: I0123 14:36:16.033349 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.134100 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.141598 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.314586 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rxc4\" (UniqueName: \"kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4\") pod \"a194a858-8c18-41e1-9a10-428397753ece\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") "
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.315079 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data\") pod \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") "
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.315161 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts\") pod \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") "
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.315296 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts\") pod \"a194a858-8c18-41e1-9a10-428397753ece\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") "
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.315333 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data\") pod \"a194a858-8c18-41e1-9a10-428397753ece\" (UID: \"a194a858-8c18-41e1-9a10-428397753ece\") "
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.315392 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvhbc\" (UniqueName: \"kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc\") pod \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\" (UID: \"263d2fcc-c533-4291-8e78-d8e9a2ee2894\") "
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.320602 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts" (OuterVolumeSpecName: "scripts") pod "263d2fcc-c533-4291-8e78-d8e9a2ee2894" (UID: "263d2fcc-c533-4291-8e78-d8e9a2ee2894"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.320896 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts" (OuterVolumeSpecName: "scripts") pod "a194a858-8c18-41e1-9a10-428397753ece" (UID: "a194a858-8c18-41e1-9a10-428397753ece"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.322108 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc" (OuterVolumeSpecName: "kube-api-access-xvhbc") pod "263d2fcc-c533-4291-8e78-d8e9a2ee2894" (UID: "263d2fcc-c533-4291-8e78-d8e9a2ee2894"). InnerVolumeSpecName "kube-api-access-xvhbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.322134 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4" (OuterVolumeSpecName: "kube-api-access-2rxc4") pod "a194a858-8c18-41e1-9a10-428397753ece" (UID: "a194a858-8c18-41e1-9a10-428397753ece"). InnerVolumeSpecName "kube-api-access-2rxc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.340687 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data" (OuterVolumeSpecName: "config-data") pod "263d2fcc-c533-4291-8e78-d8e9a2ee2894" (UID: "263d2fcc-c533-4291-8e78-d8e9a2ee2894"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.355334 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data" (OuterVolumeSpecName: "config-data") pod "a194a858-8c18-41e1-9a10-428397753ece" (UID: "a194a858-8c18-41e1-9a10-428397753ece"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417472 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417535 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417554 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a194a858-8c18-41e1-9a10-428397753ece-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417573 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvhbc\" (UniqueName: \"kubernetes.io/projected/263d2fcc-c533-4291-8e78-d8e9a2ee2894-kube-api-access-xvhbc\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417595 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rxc4\" (UniqueName: \"kubernetes.io/projected/a194a858-8c18-41e1-9a10-428397753ece-kube-api-access-2rxc4\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.417612 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/263d2fcc-c533-4291-8e78-d8e9a2ee2894-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.660265 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc" event={"ID":"a194a858-8c18-41e1-9a10-428397753ece","Type":"ContainerDied","Data":"540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94"}
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.660289 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.660421 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="540d7fc621bfb77baee6f4dc9b9760f31678a0d97998700432ed6c76b8808f94"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.662931 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" event={"ID":"263d2fcc-c533-4291-8e78-d8e9a2ee2894","Type":"ContainerDied","Data":"8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261"}
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.662979 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fc9003e231bab5ab28b1915938e5b790cbef806227566e21ee1a940eb1ba261"
Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.663082 4775 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.767376 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:36:17 crc kubenswrapper[4775]: E0123 14:36:17.767707 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263d2fcc-c533-4291-8e78-d8e9a2ee2894" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.767724 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="263d2fcc-c533-4291-8e78-d8e9a2ee2894" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:36:17 crc kubenswrapper[4775]: E0123 14:36:17.767758 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a194a858-8c18-41e1-9a10-428397753ece" containerName="nova-manage" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.767767 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a194a858-8c18-41e1-9a10-428397753ece" containerName="nova-manage" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.767967 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a194a858-8c18-41e1-9a10-428397753ece" containerName="nova-manage" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.767981 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="263d2fcc-c533-4291-8e78-d8e9a2ee2894" containerName="nova-kuttl-cell1-conductor-db-sync" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.768577 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.772896 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-conductor-config-data" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.824285 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.923966 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.924295 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-log" containerID="cri-o://3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d" gracePeriod=30 Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.924609 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-api" containerID="cri-o://39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8" gracePeriod=30 Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.925910 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gkk8\" (UniqueName: \"kubernetes.io/projected/1fd448a3-6897-490f-9c92-98590cee53ca-kube-api-access-2gkk8\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.926046 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/1fd448a3-6897-490f-9c92-98590cee53ca-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.938718 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:17 crc kubenswrapper[4775]: I0123 14:36:17.939272 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="e2f66b57-925b-4d68-9917-77fded405cfd" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://2b8e3f238a5f79ab38d936a701163189da7e99d755c73a0f5f5797f13ecc3f18" gracePeriod=30 Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.021935 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.022181 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-log" containerID="cri-o://a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda" gracePeriod=30 Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.022325 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85" gracePeriod=30 Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.027478 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd448a3-6897-490f-9c92-98590cee53ca-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.027574 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gkk8\" (UniqueName: \"kubernetes.io/projected/1fd448a3-6897-490f-9c92-98590cee53ca-kube-api-access-2gkk8\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.036191 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1fd448a3-6897-490f-9c92-98590cee53ca-config-data\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.046432 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gkk8\" (UniqueName: \"kubernetes.io/projected/1fd448a3-6897-490f-9c92-98590cee53ca-kube-api-access-2gkk8\") pod \"nova-kuttl-cell1-conductor-0\" (UID: \"1fd448a3-6897-490f-9c92-98590cee53ca\") " pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.138212 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.432359 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.534658 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs\") pod \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.535653 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data\") pod \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.535725 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4mwg\" (UniqueName: \"kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg\") pod \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\" (UID: \"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b\") " Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.535458 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs" (OuterVolumeSpecName: "logs") pod "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" (UID: "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.535968 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.539777 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg" (OuterVolumeSpecName: "kube-api-access-p4mwg") pod "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" (UID: "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b"). InnerVolumeSpecName "kube-api-access-p4mwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.563028 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data" (OuterVolumeSpecName: "config-data") pod "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" (UID: "54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.588689 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.639134 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.639191 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4mwg\" (UniqueName: \"kubernetes.io/projected/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b-kube-api-access-p4mwg\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.643334 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-0"] Jan 23 14:36:18 crc kubenswrapper[4775]: W0123 14:36:18.646735 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fd448a3_6897_490f_9c92_98590cee53ca.slice/crio-fd23cb47fab31b28e20c4e94779f6661d78bb8dc7769c7349d365bad01d35d17 WatchSource:0}: Error finding container fd23cb47fab31b28e20c4e94779f6661d78bb8dc7769c7349d365bad01d35d17: Status 404 returned error can't find the container with id fd23cb47fab31b28e20c4e94779f6661d78bb8dc7769c7349d365bad01d35d17 Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.671968 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"1fd448a3-6897-490f-9c92-98590cee53ca","Type":"ContainerStarted","Data":"fd23cb47fab31b28e20c4e94779f6661d78bb8dc7769c7349d365bad01d35d17"} Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673308 4775 generic.go:334] "Generic (PLEG): container finished" podID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerID="39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8" exitCode=0 Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673329 4775 generic.go:334] "Generic (PLEG): container finished" podID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerID="3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d" exitCode=143 Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673356 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerDied","Data":"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"} Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673372 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerDied","Data":"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"} Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673382 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b","Type":"ContainerDied","Data":"c41817894212c3f14aaa33f7a5882a3305e520c3835b174c4049ab7f3bdb2ab3"} Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673396 4775 scope.go:117] "RemoveContainer" containerID="39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.673497 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.679342 4775 generic.go:334] "Generic (PLEG): container finished" podID="e2f66b57-925b-4d68-9917-77fded405cfd" containerID="2b8e3f238a5f79ab38d936a701163189da7e99d755c73a0f5f5797f13ecc3f18" exitCode=0 Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.679416 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e2f66b57-925b-4d68-9917-77fded405cfd","Type":"ContainerDied","Data":"2b8e3f238a5f79ab38d936a701163189da7e99d755c73a0f5f5797f13ecc3f18"} Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681514 4775 generic.go:334] "Generic (PLEG): container finished" podID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerID="5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85" exitCode=0 Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681536 4775 generic.go:334] "Generic (PLEG): container finished" podID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerID="a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda" exitCode=143 Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681550 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerDied","Data":"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"} Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681564 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerDied","Data":"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"} Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681574 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"25188693-7059-4db5-88d0-6e36d8d2d4ed","Type":"ContainerDied","Data":"673864771c4790e0938c1c15347461b29921695eaa3a3c61f8309e7d406615cd"} Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.681652 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.711909 4775 scope.go:117] "RemoveContainer" containerID="3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.717293 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.727942 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736247 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.736646 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-metadata" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736661 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-metadata" Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.736671 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-log" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736687 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-log" Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.736698 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-log" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736730 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-log" Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.736749 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-api" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736756 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-api" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736959 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-log" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736971 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-metadata" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736987 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" containerName="nova-kuttl-api-api" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.736995 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" containerName="nova-kuttl-metadata-log" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.738045 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.739824 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.740010 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfbgh\" (UniqueName: \"kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh\") pod \"25188693-7059-4db5-88d0-6e36d8d2d4ed\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.740112 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data\") pod \"25188693-7059-4db5-88d0-6e36d8d2d4ed\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.740227 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs\") pod \"25188693-7059-4db5-88d0-6e36d8d2d4ed\" (UID: \"25188693-7059-4db5-88d0-6e36d8d2d4ed\") " Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.740899 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs" (OuterVolumeSpecName: "logs") pod "25188693-7059-4db5-88d0-6e36d8d2d4ed" (UID: "25188693-7059-4db5-88d0-6e36d8d2d4ed"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.746582 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.746616 4775 scope.go:117] "RemoveContainer" containerID="39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8" Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.747305 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8\": container with ID starting with 39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8 not found: ID does not exist" containerID="39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747337 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"} err="failed to get container status \"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8\": rpc error: code = NotFound desc = could not find container \"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8\": container with ID starting with 39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8 not found: ID does not exist" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747360 4775 scope.go:117] "RemoveContainer" containerID="3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d" Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.747601 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d\": container with ID starting with 3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d not found: ID does not exist" containerID="3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747628 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"} err="failed to get container status \"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d\": rpc error: code = NotFound desc = could not find container \"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d\": container with ID starting with 3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d not found: ID does not exist" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747645 4775 scope.go:117] "RemoveContainer" containerID="39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747851 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8"} err="failed to get container status \"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8\": rpc error: code = NotFound desc = could not find container \"39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8\": container with ID starting with 39f388423202ed75f16d75187c19f2d2cdf7e8455442dc2b410cc35e7612ffd8 not found: ID does not exist" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.747870 4775 scope.go:117] "RemoveContainer" containerID="3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 
14:36:18.748101 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh" (OuterVolumeSpecName: "kube-api-access-xfbgh") pod "25188693-7059-4db5-88d0-6e36d8d2d4ed" (UID: "25188693-7059-4db5-88d0-6e36d8d2d4ed"). InnerVolumeSpecName "kube-api-access-xfbgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.748118 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d"} err="failed to get container status \"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d\": rpc error: code = NotFound desc = could not find container \"3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d\": container with ID starting with 3e4fa5e890db1728e38a13f619b052d6364d5c42b60316181ebe5a82817fbb1d not found: ID does not exist" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.748153 4775 scope.go:117] "RemoveContainer" containerID="5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.751669 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.770762 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data" (OuterVolumeSpecName: "config-data") pod "25188693-7059-4db5-88d0-6e36d8d2d4ed" (UID: "25188693-7059-4db5-88d0-6e36d8d2d4ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.789040 4775 scope.go:117] "RemoveContainer" containerID="a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.815342 4775 scope.go:117] "RemoveContainer" containerID="5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85" Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.815674 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85\": container with ID starting with 5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85 not found: ID does not exist" containerID="5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.815706 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"} err="failed to get container status \"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85\": rpc error: code = NotFound desc = could not find container \"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85\": container with ID starting with 5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85 not found: ID does not exist" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.815725 4775 scope.go:117] "RemoveContainer" containerID="a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda" Jan 23 14:36:18 crc kubenswrapper[4775]: E0123 14:36:18.816058 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda\": container with ID starting with a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda not found: ID does not exist" containerID="a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.816099 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"} err="failed to get container status \"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda\": rpc error: code = NotFound desc = could not find container \"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda\": container with ID starting with a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda not found: ID does not exist" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.816126 4775 scope.go:117] "RemoveContainer" containerID="5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.817739 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85"} err="failed to get container status \"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85\": rpc error: code = NotFound desc = could not find container \"5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85\": container with ID starting with 5eb04509896cf0a0925ed7ffef7304c549ef9c0c4057c97f15e08a17025a2d85 not found: ID does not exist" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.817757 4775 scope.go:117] "RemoveContainer" containerID="a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.818051 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda"} err="failed to get container status \"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda\": rpc error: code = NotFound desc = could not find container \"a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda\": container with ID starting with a8cf445d0557f4b8d02802ff384739d41a1d6499fb5910e762bfaffa0d95deda not found: ID does not exist" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841578 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqng4\" (UniqueName: \"kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841639 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841754 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " 
pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841831 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfbgh\" (UniqueName: \"kubernetes.io/projected/25188693-7059-4db5-88d0-6e36d8d2d4ed-kube-api-access-xfbgh\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841842 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25188693-7059-4db5-88d0-6e36d8d2d4ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.841851 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25188693-7059-4db5-88d0-6e36d8d2d4ed-logs\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.943725 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdgzh\" (UniqueName: \"kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh\") pod \"e2f66b57-925b-4d68-9917-77fded405cfd\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.944645 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data\") pod \"e2f66b57-925b-4d68-9917-77fded405cfd\" (UID: \"e2f66b57-925b-4d68-9917-77fded405cfd\") " Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.945132 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqng4\" (UniqueName: \"kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.945337 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.945626 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.946473 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.949466 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh" (OuterVolumeSpecName: "kube-api-access-qdgzh") pod "e2f66b57-925b-4d68-9917-77fded405cfd" (UID: "e2f66b57-925b-4d68-9917-77fded405cfd"). InnerVolumeSpecName "kube-api-access-qdgzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.951081 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.964885 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqng4\" (UniqueName: \"kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4\") pod \"nova-kuttl-api-0\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") " pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:18 crc kubenswrapper[4775]: I0123 14:36:18.966445 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data" (OuterVolumeSpecName: "config-data") pod "e2f66b57-925b-4d68-9917-77fded405cfd" (UID: "e2f66b57-925b-4d68-9917-77fded405cfd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.041831 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.047141 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdgzh\" (UniqueName: \"kubernetes.io/projected/e2f66b57-925b-4d68-9917-77fded405cfd-kube-api-access-qdgzh\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.047231 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2f66b57-925b-4d68-9917-77fded405cfd-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.053225 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.059329 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: E0123 14:36:19.059920 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2f66b57-925b-4d68-9917-77fded405cfd" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.059944 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2f66b57-925b-4d68-9917-77fded405cfd" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.060123 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2f66b57-925b-4d68-9917-77fded405cfd" containerName="nova-kuttl-scheduler-scheduler" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.061278 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.063018 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.065919 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.067109 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.250408 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.250717 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.250797 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptv7\" (UniqueName: \"kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.352152 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.352253 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tptv7\" (UniqueName: \"kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.352315 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.352659 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.368003 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.368268 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tptv7\" (UniqueName: \"kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7\") pod \"nova-kuttl-metadata-0\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") " pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: 
I0123 14:36:19.454487 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.532450 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: W0123 14:36:19.542738 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5105347b_2714_4def_a8e9_8f2e72aa6a0e.slice/crio-d9e4670bc66038767fb0431260ce9a971d6918fc58f17b5251c95f35343d4184 WatchSource:0}: Error finding container d9e4670bc66038767fb0431260ce9a971d6918fc58f17b5251c95f35343d4184: Status 404 returned error can't find the container with id d9e4670bc66038767fb0431260ce9a971d6918fc58f17b5251c95f35343d4184 Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.700497 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" event={"ID":"1fd448a3-6897-490f-9c92-98590cee53ca","Type":"ContainerStarted","Data":"ccf720a5ab1296bf47d71ac94be34331eb7970f511d53c5e0642348a94e0e693"} Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.700876 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.705854 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerStarted","Data":"d9e4670bc66038767fb0431260ce9a971d6918fc58f17b5251c95f35343d4184"} Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.708146 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"e2f66b57-925b-4d68-9917-77fded405cfd","Type":"ContainerDied","Data":"c2b7b80f0829ed1270ce5358668a91d6a85cf45703f80218b39e2c994e384bba"} Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.708189 4775 scope.go:117] "RemoveContainer" containerID="2b8e3f238a5f79ab38d936a701163189da7e99d755c73a0f5f5797f13ecc3f18" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.708290 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.730211 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25188693-7059-4db5-88d0-6e36d8d2d4ed" path="/var/lib/kubelet/pods/25188693-7059-4db5-88d0-6e36d8d2d4ed/volumes" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.731177 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b" path="/var/lib/kubelet/pods/54fdd9aa-36fd-4817-9902-8e9e7c4d6f2b/volumes" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.753425 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0" podStartSLOduration=2.753402492 podStartE2EDuration="2.753402492s" podCreationTimestamp="2026-01-23 14:36:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:19.732149399 +0000 UTC m=+1926.726978149" watchObservedRunningTime="2026-01-23 14:36:19.753402492 +0000 UTC m=+1926.748231232" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.762822 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.775478 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.786887 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.787828 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.790116 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.794324 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.870222 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.870441 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsvjz\" (UniqueName: \"kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.956629 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"] Jan 23 14:36:19 crc kubenswrapper[4775]: W0123 14:36:19.974746 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dd9eb2a_b8f2_46dc_bf7e_84a3ed13464c.slice/crio-caa9115385ef2dd139755ddcad55b0d6b1eaeda47902eb07664c2a2e9e6d25fe WatchSource:0}: Error finding container 
caa9115385ef2dd139755ddcad55b0d6b1eaeda47902eb07664c2a2e9e6d25fe: Status 404 returned error can't find the container with id caa9115385ef2dd139755ddcad55b0d6b1eaeda47902eb07664c2a2e9e6d25fe
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.977475 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.977525 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsvjz\" (UniqueName: \"kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.981877 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:19 crc kubenswrapper[4775]: I0123 14:36:19.996503 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsvjz\" (UniqueName: \"kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz\") pod \"nova-kuttl-scheduler-0\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.103843 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.560868 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:20 crc kubenswrapper[4775]: W0123 14:36:20.568058 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda43d5dd7_2b7b_4806_b358_976cf374cd43.slice/crio-5ab988d9e205a230e0cf678a8c867f1fb120bd7bcf1bf799de80ca2df82b80ed WatchSource:0}: Error finding container 5ab988d9e205a230e0cf678a8c867f1fb120bd7bcf1bf799de80ca2df82b80ed: Status 404 returned error can't find the container with id 5ab988d9e205a230e0cf678a8c867f1fb120bd7bcf1bf799de80ca2df82b80ed
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.716569 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerStarted","Data":"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.716963 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerStarted","Data":"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.722279 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerStarted","Data":"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.722309 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerStarted","Data":"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.722321 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerStarted","Data":"caa9115385ef2dd139755ddcad55b0d6b1eaeda47902eb07664c2a2e9e6d25fe"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.723632 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a43d5dd7-2b7b-4806-b358-976cf374cd43","Type":"ContainerStarted","Data":"5ab988d9e205a230e0cf678a8c867f1fb120bd7bcf1bf799de80ca2df82b80ed"}
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.739338 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.7393227060000003 podStartE2EDuration="2.739322706s" podCreationTimestamp="2026-01-23 14:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:20.737218069 +0000 UTC m=+1927.732046809" watchObservedRunningTime="2026-01-23 14:36:20.739322706 +0000 UTC m=+1927.734151446"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.760457 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.760443805 podStartE2EDuration="1.760443805s" podCreationTimestamp="2026-01-23 14:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:20.752735367 +0000 UTC m=+1927.747564117" watchObservedRunningTime="2026-01-23 14:36:20.760443805 +0000 UTC m=+1927.755272545"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.780761 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=1.780747182 podStartE2EDuration="1.780747182s" podCreationTimestamp="2026-01-23 14:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:20.772482329 +0000 UTC m=+1927.767311069" watchObservedRunningTime="2026-01-23 14:36:20.780747182 +0000 UTC m=+1927.775575932"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.951037 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:20 crc kubenswrapper[4775]: I0123 14:36:20.961356 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:21 crc kubenswrapper[4775]: I0123 14:36:21.731404 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2f66b57-925b-4d68-9917-77fded405cfd" path="/var/lib/kubelet/pods/e2f66b57-925b-4d68-9917-77fded405cfd/volumes"
Jan 23 14:36:21 crc kubenswrapper[4775]: I0123 14:36:21.740470 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a43d5dd7-2b7b-4806-b358-976cf374cd43","Type":"ContainerStarted","Data":"37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87"}
Jan 23 14:36:21 crc kubenswrapper[4775]: I0123 14:36:21.751830 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-novncproxy-0"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.175339 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-cell1-conductor-0"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.752707 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"]
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.754161 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"]
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.754268 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.757160 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-config-data"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.758379 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell1-manage-scripts"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.844412 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8tsw\" (UniqueName: \"kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.844486 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.844526 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.946918 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8tsw\" (UniqueName: \"kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.946994 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.947035 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.955743 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.968188 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:23 crc kubenswrapper[4775]: I0123 14:36:23.976936 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8tsw\" (UniqueName: \"kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw\") pod \"nova-kuttl-cell1-cell-mapping-4gfb8\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.093116 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.455426 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.455534 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.561243 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"]
Jan 23 14:36:24 crc kubenswrapper[4775]: W0123 14:36:24.566293 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ef19dc5_1d78_479c_8220_340c46c44bdf.slice/crio-81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac WatchSource:0}: Error finding container 81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac: Status 404 returned error can't find the container with id 81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac
Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.784435 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" event={"ID":"3ef19dc5-1d78-479c-8220-340c46c44bdf","Type":"ContainerStarted","Data":"7866fa95041ef01597a04bb378890e5ad494e3f63a1535140905408dc45663a9"}
Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.785090 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" event={"ID":"3ef19dc5-1d78-479c-8220-340c46c44bdf","Type":"ContainerStarted","Data":"81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac"}
Jan 23 14:36:24 crc kubenswrapper[4775]: I0123 14:36:24.804219 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" podStartSLOduration=1.8042027059999999 podStartE2EDuration="1.804202706s" podCreationTimestamp="2026-01-23 14:36:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:24.802521291 +0000 UTC m=+1931.797350041" watchObservedRunningTime="2026-01-23 14:36:24.804202706 +0000 UTC m=+1931.799031456"
Jan 23 14:36:25 crc kubenswrapper[4775]: I0123 14:36:25.104424 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.068749 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.069441 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.454927 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.455394 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.838297 4775 generic.go:334] "Generic (PLEG): container finished" podID="3ef19dc5-1d78-479c-8220-340c46c44bdf" containerID="7866fa95041ef01597a04bb378890e5ad494e3f63a1535140905408dc45663a9" exitCode=0
Jan 23 14:36:29 crc kubenswrapper[4775]: I0123 14:36:29.838384 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" event={"ID":"3ef19dc5-1d78-479c-8220-340c46c44bdf","Type":"ContainerDied","Data":"7866fa95041ef01597a04bb378890e5ad494e3f63a1535140905408dc45663a9"}
Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.105157 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.150099 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.226:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.150338 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.226:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.151174 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.537080 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.227:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.537072 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.227:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 23 14:36:30 crc kubenswrapper[4775]: I0123 14:36:30.934118 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.275340 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.385608 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8tsw\" (UniqueName: \"kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw\") pod \"3ef19dc5-1d78-479c-8220-340c46c44bdf\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") "
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.385691 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data\") pod \"3ef19dc5-1d78-479c-8220-340c46c44bdf\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") "
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.385792 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts\") pod \"3ef19dc5-1d78-479c-8220-340c46c44bdf\" (UID: \"3ef19dc5-1d78-479c-8220-340c46c44bdf\") "
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.391115 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts" (OuterVolumeSpecName: "scripts") pod "3ef19dc5-1d78-479c-8220-340c46c44bdf" (UID: "3ef19dc5-1d78-479c-8220-340c46c44bdf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.394128 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw" (OuterVolumeSpecName: "kube-api-access-h8tsw") pod "3ef19dc5-1d78-479c-8220-340c46c44bdf" (UID: "3ef19dc5-1d78-479c-8220-340c46c44bdf"). InnerVolumeSpecName "kube-api-access-h8tsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.410649 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data" (OuterVolumeSpecName: "config-data") pod "3ef19dc5-1d78-479c-8220-340c46c44bdf" (UID: "3ef19dc5-1d78-479c-8220-340c46c44bdf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.487670 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8tsw\" (UniqueName: \"kubernetes.io/projected/3ef19dc5-1d78-479c-8220-340c46c44bdf-kube-api-access-h8tsw\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.487707 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.487716 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef19dc5-1d78-479c-8220-340c46c44bdf-scripts\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.866874 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8" event={"ID":"3ef19dc5-1d78-479c-8220-340c46c44bdf","Type":"ContainerDied","Data":"81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac"}
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.866919 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81cd66e9db4ececd53a92271283d2bc06593ff34f6f02f3860ff6bae30de38ac"
Jan 23 14:36:31 crc kubenswrapper[4775]: I0123 14:36:31.866944 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.155840 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.156144 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-log" containerID="cri-o://cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00" gracePeriod=30
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.156178 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-api" containerID="cri-o://00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325" gracePeriod=30
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.171866 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.234609 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.235154 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-log" containerID="cri-o://a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920" gracePeriod=30
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.235274 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-metadata" containerID="cri-o://b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818" gracePeriod=30
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.892988 4775 generic.go:334] "Generic (PLEG): container finished" podID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerID="a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920" exitCode=143
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.893127 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerDied","Data":"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920"}
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.896791 4775 generic.go:334] "Generic (PLEG): container finished" podID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerID="cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00" exitCode=143
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.896865 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerDied","Data":"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00"}
Jan 23 14:36:32 crc kubenswrapper[4775]: I0123 14:36:32.897109 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerName="nova-kuttl-scheduler-scheduler" containerID="cri-o://37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" gracePeriod=30
Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.106713 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.110997 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.113416 4775 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.113473 4775 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.816797 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.827173 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.934362 4775 generic.go:334] "Generic (PLEG): container finished" podID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerID="b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818" exitCode=0
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.934465 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerDied","Data":"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818"}
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.934507 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c","Type":"ContainerDied","Data":"caa9115385ef2dd139755ddcad55b0d6b1eaeda47902eb07664c2a2e9e6d25fe"}
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.934723 4775 scope.go:117] "RemoveContainer" containerID="b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.934933 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.937663 4775 generic.go:334] "Generic (PLEG): container finished" podID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerID="00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325" exitCode=0
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.937693 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerDied","Data":"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325"}
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.937717 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"5105347b-2714-4def-a8e9-8f2e72aa6a0e","Type":"ContainerDied","Data":"d9e4670bc66038767fb0431260ce9a971d6918fc58f17b5251c95f35343d4184"}
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.937767 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.956890 4775 scope.go:117] "RemoveContainer" containerID="a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972329 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data\") pod \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") "
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972387 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqng4\" (UniqueName: \"kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4\") pod \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") "
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972455 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs\") pod \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\" (UID: \"5105347b-2714-4def-a8e9-8f2e72aa6a0e\") "
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972531 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs\") pod \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") "
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972555 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tptv7\" (UniqueName: \"kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7\") pod \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") "
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.972584 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data\") pod \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\" (UID: \"5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c\") "
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.973053 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs" (OuterVolumeSpecName: "logs") pod "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" (UID: "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.973372 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs" (OuterVolumeSpecName: "logs") pod "5105347b-2714-4def-a8e9-8f2e72aa6a0e" (UID: "5105347b-2714-4def-a8e9-8f2e72aa6a0e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.978021 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7" (OuterVolumeSpecName: "kube-api-access-tptv7") pod "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" (UID: "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c"). InnerVolumeSpecName "kube-api-access-tptv7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.978119 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4" (OuterVolumeSpecName: "kube-api-access-xqng4") pod "5105347b-2714-4def-a8e9-8f2e72aa6a0e" (UID: "5105347b-2714-4def-a8e9-8f2e72aa6a0e"). InnerVolumeSpecName "kube-api-access-xqng4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.981659 4775 scope.go:117] "RemoveContainer" containerID="b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818"
Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.982153 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818\": container with ID starting with b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818 not found: ID does not exist" containerID="b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.982191 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818"} err="failed to get container status \"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818\": rpc error: code = NotFound desc = could not find container \"b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818\": container with ID starting with b5fa7a3b853cc420f905b785b5f3e45bf3cc366b5a136d58547716107cff7818 not found: ID does not exist"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.982217 4775 scope.go:117] "RemoveContainer" containerID="a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920"
Jan 23 14:36:35 crc kubenswrapper[4775]: E0123 14:36:35.982621 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920\": container with ID starting with a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920 not found: ID does not exist" containerID="a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.982657 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920"} err="failed to get container status \"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920\": rpc error: code = NotFound desc = could not find container \"a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920\": container with ID starting with a52b2b8897ad657ebbfabc500688e10f06efb29fd89d42f56d12ede604cb2920 not found: ID does not exist"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.982678 4775 scope.go:117] "RemoveContainer" containerID="00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325"
Jan 23 14:36:35 crc kubenswrapper[4775]: I0123 14:36:35.993134 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data" (OuterVolumeSpecName: "config-data") pod "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" (UID: "5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.004552 4775 scope.go:117] "RemoveContainer" containerID="cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.012987 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data" (OuterVolumeSpecName: "config-data") pod "5105347b-2714-4def-a8e9-8f2e72aa6a0e" (UID: "5105347b-2714-4def-a8e9-8f2e72aa6a0e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.023022 4775 scope.go:117] "RemoveContainer" containerID="00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325"
Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.023396 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325\": container with ID starting with 00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325 not found: ID does not exist" containerID="00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.023434 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325"} err="failed to get container status \"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325\": rpc error: code = NotFound desc = could not find container \"00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325\": container with ID starting with 00e4a30509a85fcd43493ad6ff99a3894421472f9f200b72e0d40abb4cb63325 not found: ID does not exist"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.023459 4775 scope.go:117] "RemoveContainer" containerID="cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00"
Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.023792 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00\": container with ID starting with cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00 not found: ID does not exist" containerID="cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.023847 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00"} err="failed to get container status \"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00\": rpc error: code = NotFound desc = could not find container \"cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00\": container with ID starting with cddd25a43481286d21a7942ec0a19f14f1525739081c8dcfc723d00716195f00 not found: ID does not exist"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074465 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqng4\" (UniqueName: \"kubernetes.io/projected/5105347b-2714-4def-a8e9-8f2e72aa6a0e-kube-api-access-xqng4\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074516 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5105347b-2714-4def-a8e9-8f2e72aa6a0e-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074537 4775 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-logs\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074554 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tptv7\" (UniqueName: \"kubernetes.io/projected/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-kube-api-access-tptv7\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074577 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.074595 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5105347b-2714-4def-a8e9-8f2e72aa6a0e-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.291424 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.316573 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.329388 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.329892 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-log"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.329920 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-log"
Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.329950 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-log"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.329958 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-log"
Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.329975 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-api"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.329984 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-api"
Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.330004 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef19dc5-1d78-479c-8220-340c46c44bdf" containerName="nova-manage"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330012 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef19dc5-1d78-479c-8220-340c46c44bdf" containerName="nova-manage"
Jan 23 14:36:36 crc kubenswrapper[4775]: E0123 14:36:36.330022 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330030 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330208 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-metadata"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330257 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-api"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330276 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" containerName="nova-kuttl-api-log"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330292 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" containerName="nova-kuttl-metadata-log"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.330311 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef19dc5-1d78-479c-8220-340c46c44bdf" containerName="nova-manage"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.333015 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.336480 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-metadata-config-data"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.368634 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.386932 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.394655 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.408119 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.410720 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.417025 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-api-config-data"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.418562 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.490130 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d0a843-11de-43a6-9c92-6a65a6d406ec-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.490196 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d0a843-11de-43a6-9c92-6a65a6d406ec-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.490716 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bglpx\" (UniqueName: \"kubernetes.io/projected/72d0a843-11de-43a6-9c92-6a65a6d406ec-kube-api-access-bglpx\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.592838 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j849m\" (UniqueName: \"kubernetes.io/projected/56066bf2-4408-46e5-8df0-6ce62447bf2a-kube-api-access-j849m\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.592903 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bglpx\" (UniqueName: \"kubernetes.io/projected/72d0a843-11de-43a6-9c92-6a65a6d406ec-kube-api-access-bglpx\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.592950 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56066bf2-4408-46e5-8df0-6ce62447bf2a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.593012 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d0a843-11de-43a6-9c92-6a65a6d406ec-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.593047 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d0a843-11de-43a6-9c92-6a65a6d406ec-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.593082 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56066bf2-4408-46e5-8df0-6ce62447bf2a-logs\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.593626 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72d0a843-11de-43a6-9c92-6a65a6d406ec-logs\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.598368 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72d0a843-11de-43a6-9c92-6a65a6d406ec-config-data\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.622426 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bglpx\" (UniqueName: \"kubernetes.io/projected/72d0a843-11de-43a6-9c92-6a65a6d406ec-kube-api-access-bglpx\") pod \"nova-kuttl-metadata-0\" (UID: \"72d0a843-11de-43a6-9c92-6a65a6d406ec\") " pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.695006 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j849m\" (UniqueName: \"kubernetes.io/projected/56066bf2-4408-46e5-8df0-6ce62447bf2a-kube-api-access-j849m\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.695351 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56066bf2-4408-46e5-8df0-6ce62447bf2a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.695411 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56066bf2-4408-46e5-8df0-6ce62447bf2a-logs\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.695964 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56066bf2-4408-46e5-8df0-6ce62447bf2a-logs\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.701434 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56066bf2-4408-46e5-8df0-6ce62447bf2a-config-data\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.716453 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j849m\" (UniqueName: \"kubernetes.io/projected/56066bf2-4408-46e5-8df0-6ce62447bf2a-kube-api-access-j849m\") pod \"nova-kuttl-api-0\" (UID: \"56066bf2-4408-46e5-8df0-6ce62447bf2a\") " pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.745498 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.753134 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.954420 4775 generic.go:334] "Generic (PLEG): container finished" podID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerID="37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87" exitCode=0
Jan 23 14:36:36 crc kubenswrapper[4775]: I0123 14:36:36.954488 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a43d5dd7-2b7b-4806-b358-976cf374cd43","Type":"ContainerDied","Data":"37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87"}
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.017472 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.102603 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data\") pod \"a43d5dd7-2b7b-4806-b358-976cf374cd43\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") "
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.102675 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsvjz\" (UniqueName: \"kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz\") pod \"a43d5dd7-2b7b-4806-b358-976cf374cd43\" (UID: \"a43d5dd7-2b7b-4806-b358-976cf374cd43\") "
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.108298 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz" (OuterVolumeSpecName: "kube-api-access-qsvjz") pod "a43d5dd7-2b7b-4806-b358-976cf374cd43" (UID: "a43d5dd7-2b7b-4806-b358-976cf374cd43"). InnerVolumeSpecName "kube-api-access-qsvjz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.124600 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data" (OuterVolumeSpecName: "config-data") pod "a43d5dd7-2b7b-4806-b358-976cf374cd43" (UID: "a43d5dd7-2b7b-4806-b358-976cf374cd43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.203881 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a43d5dd7-2b7b-4806-b358-976cf374cd43-config-data\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.203909 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsvjz\" (UniqueName: \"kubernetes.io/projected/a43d5dd7-2b7b-4806-b358-976cf374cd43-kube-api-access-qsvjz\") on node \"crc\" DevicePath \"\""
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.240238 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-metadata-0"]
Jan 23 14:36:37 crc kubenswrapper[4775]: W0123 14:36:37.248091 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72d0a843_11de_43a6_9c92_6a65a6d406ec.slice/crio-abf5418a3ef69bff011b9eab0c3bd63d50df19032ceef2f17236f9c5ab52f00d WatchSource:0}: Error finding container abf5418a3ef69bff011b9eab0c3bd63d50df19032ceef2f17236f9c5ab52f00d: Status 404 returned error can't find the container with id abf5418a3ef69bff011b9eab0c3bd63d50df19032ceef2f17236f9c5ab52f00d
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.325286 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-api-0"]
Jan 23 14:36:37 crc kubenswrapper[4775]: W0123 14:36:37.353034 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56066bf2_4408_46e5_8df0_6ce62447bf2a.slice/crio-5c484728c59dbbc65de7635caa9ddef878b4aaaadf19ad5a8eddad464e1f9152 WatchSource:0}: Error finding container 5c484728c59dbbc65de7635caa9ddef878b4aaaadf19ad5a8eddad464e1f9152: Status 404 returned error can't find the container with id 5c484728c59dbbc65de7635caa9ddef878b4aaaadf19ad5a8eddad464e1f9152
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.723700 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5105347b-2714-4def-a8e9-8f2e72aa6a0e" path="/var/lib/kubelet/pods/5105347b-2714-4def-a8e9-8f2e72aa6a0e/volumes"
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.726192 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c" path="/var/lib/kubelet/pods/5dd9eb2a-b8f2-46dc-bf7e-84a3ed13464c/volumes"
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.967243 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"72d0a843-11de-43a6-9c92-6a65a6d406ec","Type":"ContainerStarted","Data":"e49293d26a1d43b1b80a74401fb27db477b1288d691ee579a23570aa8c32f3bb"}
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.967292 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"72d0a843-11de-43a6-9c92-6a65a6d406ec","Type":"ContainerStarted","Data":"3ccd39a9c93904f3782dbc53718b2100dfaad158606e6a6bc627284ffe59845d"}
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.967308 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-metadata-0" event={"ID":"72d0a843-11de-43a6-9c92-6a65a6d406ec","Type":"ContainerStarted","Data":"abf5418a3ef69bff011b9eab0c3bd63d50df19032ceef2f17236f9c5ab52f00d"}
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.969544 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"a43d5dd7-2b7b-4806-b358-976cf374cd43","Type":"ContainerDied","Data":"5ab988d9e205a230e0cf678a8c867f1fb120bd7bcf1bf799de80ca2df82b80ed"}
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.969574 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.969618 4775 scope.go:117] "RemoveContainer" containerID="37f72809eb5ac9ef19d9f3238fb00e4dda525d2962892965c464bb0691074a87"
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.972319 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"56066bf2-4408-46e5-8df0-6ce62447bf2a","Type":"ContainerStarted","Data":"d22b7410bec1d7df615f28412d9ac6a6d8a1918ed885d07df4ab369c8e80dddf"}
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.972406 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"56066bf2-4408-46e5-8df0-6ce62447bf2a","Type":"ContainerStarted","Data":"a10fcd98beba1aa98e14b18f127100ea010a9b336b358c4d59126c39e3ef3c78"}
Jan 23 14:36:37 crc kubenswrapper[4775]: I0123 14:36:37.972428 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-api-0" event={"ID":"56066bf2-4408-46e5-8df0-6ce62447bf2a","Type":"ContainerStarted","Data":"5c484728c59dbbc65de7635caa9ddef878b4aaaadf19ad5a8eddad464e1f9152"}
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.016029 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-metadata-0" podStartSLOduration=2.016008286 podStartE2EDuration="2.016008286s" podCreationTimestamp="2026-01-23 14:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:37.992093982 +0000 UTC m=+1944.986922752" watchObservedRunningTime="2026-01-23 14:36:38.016008286 +0000 UTC m=+1945.010837046"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.020691 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-api-0" podStartSLOduration=2.020671805 podStartE2EDuration="2.020671805s" podCreationTimestamp="2026-01-23 14:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:38.012109157 +0000 UTC m=+1945.006937917" watchObservedRunningTime="2026-01-23 14:36:38.020671805 +0000 UTC m=+1945.015500565"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.032838 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.056265 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.063367 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:38 crc kubenswrapper[4775]: E0123 14:36:38.063800 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.063841 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.064062 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" containerName="nova-kuttl-scheduler-scheduler"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.064683 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.067564 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-scheduler-config-data"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.070119 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.221167 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.221297 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j85p6\" (UniqueName: \"kubernetes.io/projected/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-kube-api-access-j85p6\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.322953 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.323070 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j85p6\" (UniqueName: \"kubernetes.io/projected/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-kube-api-access-j85p6\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.327717 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-config-data\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.338859 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j85p6\" (UniqueName: \"kubernetes.io/projected/bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0-kube-api-access-j85p6\") pod \"nova-kuttl-scheduler-0\" (UID: \"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0\") " pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.392519 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.720734 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-scheduler-0"]
Jan 23 14:36:38 crc kubenswrapper[4775]: W0123 14:36:38.724962 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdfa6b38_3f0a_4f8e_9bd4_ec3907a919f0.slice/crio-ba3441fa5fbd7fb0e40e27e177aa55e82097f7b1c32b895c5263296a07da0bca WatchSource:0}: Error finding container ba3441fa5fbd7fb0e40e27e177aa55e82097f7b1c32b895c5263296a07da0bca: Status 404 returned error can't find the container with id ba3441fa5fbd7fb0e40e27e177aa55e82097f7b1c32b895c5263296a07da0bca
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.986182 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0","Type":"ContainerStarted","Data":"0624c3c816b60799e510395c36653db02ae4c7a578578b24e14fa96b3ff92dec"}
Jan 23 14:36:38 crc kubenswrapper[4775]: I0123 14:36:38.986646 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-scheduler-0" event={"ID":"bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0","Type":"ContainerStarted","Data":"ba3441fa5fbd7fb0e40e27e177aa55e82097f7b1c32b895c5263296a07da0bca"}
Jan 23 14:36:39 crc kubenswrapper[4775]: I0123 14:36:39.012854 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-scheduler-0" podStartSLOduration=1.012797291 podStartE2EDuration="1.012797291s" podCreationTimestamp="2026-01-23 14:36:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:36:39.003709218 +0000 UTC m=+1945.998537948" watchObservedRunningTime="2026-01-23 14:36:39.012797291 +0000 UTC m=+1946.007626081"
Jan 23 14:36:39 crc kubenswrapper[4775]: I0123 14:36:39.730232 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a43d5dd7-2b7b-4806-b358-976cf374cd43" path="/var/lib/kubelet/pods/a43d5dd7-2b7b-4806-b358-976cf374cd43/volumes"
Jan 23 14:36:41 crc kubenswrapper[4775]: I0123 14:36:41.746510 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:41 crc kubenswrapper[4775]: I0123 14:36:41.746599 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:43 crc kubenswrapper[4775]: I0123 14:36:43.393138 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-scheduler-0"
Jan 23 14:36:46 crc kubenswrapper[4775]: I0123 14:36:46.746453 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:46 crc kubenswrapper[4775]: I0123 14:36:46.747267 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-metadata-0"
Jan 23 14:36:46 crc kubenswrapper[4775]: I0123 14:36:46.753969 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:46 crc kubenswrapper[4775]: I0123 14:36:46.754033 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-api-0"
Jan 23 14:36:47 crc kubenswrapper[4775]:
I0123 14:36:47.910976 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="56066bf2-4408-46e5-8df0-6ce62447bf2a" containerName="nova-kuttl-api-log" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:47 crc kubenswrapper[4775]: I0123 14:36:47.911043 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-api-0" podUID="56066bf2-4408-46e5-8df0-6ce62447bf2a" containerName="nova-kuttl-api-api" probeResult="failure" output="Get \"http://10.217.0.231:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:47 crc kubenswrapper[4775]: I0123 14:36:47.911048 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="72d0a843-11de-43a6-9c92-6a65a6d406ec" containerName="nova-kuttl-metadata-metadata" probeResult="failure" output="Get \"http://10.217.0.230:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:47 crc kubenswrapper[4775]: I0123 14:36:47.910969 4775 prober.go:107] "Probe failed" probeType="Startup" pod="nova-kuttl-default/nova-kuttl-metadata-0" podUID="72d0a843-11de-43a6-9c92-6a65a6d406ec" containerName="nova-kuttl-metadata-log" probeResult="failure" output="Get \"http://10.217.0.230:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 14:36:48 crc kubenswrapper[4775]: I0123 14:36:48.393746 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:48 crc kubenswrapper[4775]: I0123 14:36:48.448281 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:49 crc kubenswrapper[4775]: I0123 14:36:49.138929 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-scheduler-0" Jan 23 14:36:53 crc kubenswrapper[4775]: I0123 14:36:53.219274 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:36:53 crc kubenswrapper[4775]: I0123 14:36:53.219891 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.750532 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.751016 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.753753 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.753962 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-metadata-0" 
Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.759239 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.759822 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.763779 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:56 crc kubenswrapper[4775]: I0123 14:36:56.767050 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:57 crc kubenswrapper[4775]: I0123 14:36:57.195427 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:57 crc kubenswrapper[4775]: I0123 14:36:57.199726 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="nova-kuttl-default/nova-kuttl-api-0" Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.836594 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"] Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.838086 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.840367 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.840985 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.847854 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"] Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.910616 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.910695 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn495\" (UniqueName: \"kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:58 crc kubenswrapper[4775]: I0123 14:36:58.910777 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.012265 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data\") pod 
\"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.012366 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn495\" (UniqueName: \"kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.012408 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.021163 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.021618 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.042386 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn495\" (UniqueName: \"kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495\") pod \"nova-kuttl-cell1-cell-delete-w7tbz\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.195345 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:36:59 crc kubenswrapper[4775]: I0123 14:36:59.662317 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"] Jan 23 14:37:00 crc kubenswrapper[4775]: I0123 14:37:00.225070 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"b161842b5aa3dedc238e1aa217c4cd2d9623581d6c65e953c9e5fd5b44556ad4"} Jan 23 14:37:00 crc kubenswrapper[4775]: I0123 14:37:00.226473 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"27562d541f20254a2f84db2c1a11a1410fb6f2f590a4c41036a86757dd88cf6b"} Jan 23 14:37:04 crc kubenswrapper[4775]: I0123 14:37:04.269004 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="b161842b5aa3dedc238e1aa217c4cd2d9623581d6c65e953c9e5fd5b44556ad4" exitCode=2 Jan 23 14:37:04 crc kubenswrapper[4775]: I0123 14:37:04.269611 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"b161842b5aa3dedc238e1aa217c4cd2d9623581d6c65e953c9e5fd5b44556ad4"} Jan 23 14:37:04 crc kubenswrapper[4775]: I0123 14:37:04.270147 4775 scope.go:117] "RemoveContainer" containerID="b161842b5aa3dedc238e1aa217c4cd2d9623581d6c65e953c9e5fd5b44556ad4" Jan 23 14:37:05 crc kubenswrapper[4775]: I0123 14:37:05.285410 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25"} Jan 23 14:37:05 crc kubenswrapper[4775]: I0123 14:37:05.324223 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podStartSLOduration=7.324198795 podStartE2EDuration="7.324198795s" podCreationTimestamp="2026-01-23 14:36:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:37:00.249576435 +0000 UTC m=+1967.244405185" watchObservedRunningTime="2026-01-23 14:37:05.324198795 +0000 UTC m=+1972.319027565" Jan 23 14:37:09 crc kubenswrapper[4775]: I0123 14:37:09.360548 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25" exitCode=2 Jan 23 14:37:09 crc kubenswrapper[4775]: I0123 14:37:09.360635 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25"} Jan 23 14:37:09 crc kubenswrapper[4775]: I0123 14:37:09.362860 4775 scope.go:117] "RemoveContainer" containerID="b161842b5aa3dedc238e1aa217c4cd2d9623581d6c65e953c9e5fd5b44556ad4" Jan 23 14:37:09 crc kubenswrapper[4775]: I0123 14:37:09.363684 4775 scope.go:117] "RemoveContainer" containerID="89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25" Jan 23 
14:37:09 crc kubenswrapper[4775]: E0123 14:37:09.364359 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:37:21 crc kubenswrapper[4775]: I0123 14:37:21.714775 4775 scope.go:117] "RemoveContainer" containerID="89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25" Jan 23 14:37:22 crc kubenswrapper[4775]: I0123 14:37:22.511730 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9"} Jan 23 14:37:23 crc kubenswrapper[4775]: I0123 14:37:23.219182 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:37:23 crc kubenswrapper[4775]: I0123 14:37:23.219281 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:37:26 crc kubenswrapper[4775]: I0123 14:37:26.560654 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9" exitCode=2 Jan 23 14:37:26 crc kubenswrapper[4775]: I0123 14:37:26.560771 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9"} Jan 23 14:37:26 crc kubenswrapper[4775]: I0123 14:37:26.561268 4775 scope.go:117] "RemoveContainer" containerID="89cbb6be44ac6789a13ccd94ec2a5eb30f51a2000020301a4257579f65175f25" Jan 23 14:37:26 crc kubenswrapper[4775]: I0123 14:37:26.562455 4775 scope.go:117] "RemoveContainer" containerID="c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9" Jan 23 14:37:26 crc kubenswrapper[4775]: E0123 14:37:26.563190 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:37:37 crc kubenswrapper[4775]: I0123 14:37:37.713749 4775 scope.go:117] "RemoveContainer" containerID="c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9" Jan 23 14:37:37 crc kubenswrapper[4775]: E0123 14:37:37.714646 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage 
pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.533219 4775 scope.go:117] "RemoveContainer" containerID="827309d081a52f2f4fbdc446573f9dbf6756c3faef728c7a3ede91f774184851" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.592611 4775 scope.go:117] "RemoveContainer" containerID="4f1cabf38bb4ec4b946564e2b7accc422c82ed3dca66b33da4fca4b19d4c5643" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.640648 4775 scope.go:117] "RemoveContainer" containerID="552a75aff373d33848d323f4e1a099464b0ab75b386e7916291405fa3aa8b333" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.674569 4775 scope.go:117] "RemoveContainer" containerID="ecad2940c2ff1569920921fdd03a6c333edaa15c5f0818afcf6db854f924e5ab" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.718511 4775 scope.go:117] "RemoveContainer" containerID="46c83cc2befa55d2730e0306d1a537315368a038fa5d8e25f6f9a9178ae4909d" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.761878 4775 scope.go:117] "RemoveContainer" containerID="799ce1823863a3c15c53a4d22727a916392492bc10d370e2462dbc8b6ea31ac8" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.799577 4775 scope.go:117] "RemoveContainer" containerID="36da3a3e665fb3823516d8d90857086698e0e37c43b293f38337204d81ca04a2" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.835683 4775 scope.go:117] "RemoveContainer" containerID="f3d6d9e6a7043cb32f7f7ac11281394b9efc64f38742f080cf771797930a3cc3" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.889324 4775 scope.go:117] "RemoveContainer" containerID="33a99232a0ae7d230c0ca5e3a7fcc4bde1520167a1ceba4a466d07976af3e8d1" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.909087 4775 scope.go:117] "RemoveContainer" containerID="2079dfd1f90a546b48b0adf5addfe5584632a67d75d8c2a2dfabd83d3cfc9c6f" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.969601 4775 scope.go:117] "RemoveContainer" containerID="6d2aa10a47d2fcb45e935313a220958ccb5ce5c86f680afa48a823e4a53178f0" Jan 23 14:37:47 crc kubenswrapper[4775]: I0123 14:37:47.997574 4775 scope.go:117] "RemoveContainer" containerID="7683bb31e0e3c33c12802ae8ef8cb905ee4053a0b8cff940fda829caf0802a6a" Jan 23 14:37:49 crc kubenswrapper[4775]: I0123 14:37:49.714259 4775 scope.go:117] "RemoveContainer" containerID="c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9" Jan 23 14:37:50 crc kubenswrapper[4775]: I0123 14:37:50.871724 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2"} Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.219015 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.219089 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.219144 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.220037 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.220109 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce" gracePeriod=600 Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.916681 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce" exitCode=0 Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.917288 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce"} Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.917319 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d"} Jan 23 14:37:53 crc kubenswrapper[4775]: I0123 14:37:53.917338 4775 scope.go:117] "RemoveContainer" containerID="69352c685886c633ea6d0b537597dc4c75f21afb213d286cff0fe72c9a4c5342" Jan 23 14:37:54 crc kubenswrapper[4775]: I0123 14:37:54.935632 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2" exitCode=2 Jan 23 14:37:54 crc kubenswrapper[4775]: I0123 14:37:54.935719 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2"} Jan 23 14:37:54 crc kubenswrapper[4775]: I0123 14:37:54.936193 4775 scope.go:117] "RemoveContainer" containerID="c2686065ea0fd21e09216b2752bdc5ea00d6bff72a52304fb3c1e24866cf35b9" Jan 23 14:37:54 crc kubenswrapper[4775]: I0123 14:37:54.936983 4775 scope.go:117] "RemoveContainer" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2" Jan 23 14:37:54 crc kubenswrapper[4775]: E0123 14:37:54.937338 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" 
podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:38:07 crc kubenswrapper[4775]: I0123 14:38:07.715723 4775 scope.go:117] "RemoveContainer" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2" Jan 23 14:38:07 crc kubenswrapper[4775]: E0123 14:38:07.719357 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:38:20 crc kubenswrapper[4775]: I0123 14:38:20.714503 4775 scope.go:117] "RemoveContainer" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2" Jan 23 14:38:20 crc kubenswrapper[4775]: E0123 14:38:20.715485 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:38:35 crc kubenswrapper[4775]: I0123 14:38:35.716082 4775 scope.go:117] "RemoveContainer" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2" Jan 23 14:38:36 crc kubenswrapper[4775]: I0123 14:38:36.365938 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"} Jan 23 14:38:40 crc kubenswrapper[4775]: I0123 14:38:40.417729 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd" exitCode=2 Jan 23 14:38:40 crc kubenswrapper[4775]: I0123 14:38:40.417873 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"} Jan 23 14:38:40 crc kubenswrapper[4775]: I0123 14:38:40.418206 4775 scope.go:117] "RemoveContainer" containerID="c96962219cc02cf6545d8daad2e49166d04ca29a855c1f10fa34771111704ad2" Jan 23 14:38:40 crc kubenswrapper[4775]: I0123 14:38:40.419251 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd" Jan 23 14:38:40 crc kubenswrapper[4775]: E0123 14:38:40.419646 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:38:48 crc kubenswrapper[4775]: I0123 14:38:48.282242 4775 scope.go:117] "RemoveContainer" containerID="af2e3d2fa526f083ebc61856e091755e854affc68850f0ccf9dc55db4575410a" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.099308 4775 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-2jvq2"] Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.101590 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.124827 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"] Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.176698 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxqvc\" (UniqueName: \"kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.176866 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.176909 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.278864 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.278956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.279041 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxqvc\" (UniqueName: \"kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.279472 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.279523 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " 
pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.307111 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxqvc\" (UniqueName: \"kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc\") pod \"redhat-operators-2jvq2\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.447484 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:38:51 crc kubenswrapper[4775]: I0123 14:38:51.765631 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"] Jan 23 14:38:51 crc kubenswrapper[4775]: W0123 14:38:51.769813 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8 WatchSource:0}: Error finding container 59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8: Status 404 returned error can't find the container with id 59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8 Jan 23 14:38:52 crc kubenswrapper[4775]: I0123 14:38:52.527305 4775 generic.go:334] "Generic (PLEG): container finished" podID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerID="15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539" exitCode=0 Jan 23 14:38:52 crc kubenswrapper[4775]: I0123 14:38:52.527389 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerDied","Data":"15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539"} Jan 23 14:38:52 crc kubenswrapper[4775]: I0123 14:38:52.527673 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerStarted","Data":"59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8"} Jan 23 14:38:52 crc kubenswrapper[4775]: I0123 14:38:52.529300 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:38:53 crc kubenswrapper[4775]: I0123 14:38:53.543018 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerStarted","Data":"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad"} Jan 23 14:38:53 crc kubenswrapper[4775]: I0123 14:38:53.724239 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd" Jan 23 14:38:53 crc kubenswrapper[4775]: E0123 14:38:53.724454 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:38:54 crc kubenswrapper[4775]: I0123 14:38:54.560255 4775 generic.go:334] "Generic (PLEG): container finished" podID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" 
containerID="80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad" exitCode=0 Jan 23 14:38:54 crc kubenswrapper[4775]: I0123 14:38:54.560501 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerDied","Data":"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad"} Jan 23 14:38:55 crc kubenswrapper[4775]: I0123 14:38:55.573289 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerStarted","Data":"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b"} Jan 23 14:38:55 crc kubenswrapper[4775]: I0123 14:38:55.595339 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2jvq2" podStartSLOduration=2.141401147 podStartE2EDuration="4.595305123s" podCreationTimestamp="2026-01-23 14:38:51 +0000 UTC" firstStartedPulling="2026-01-23 14:38:52.529106514 +0000 UTC m=+2079.523935254" lastFinishedPulling="2026-01-23 14:38:54.98301045 +0000 UTC m=+2081.977839230" observedRunningTime="2026-01-23 14:38:55.591217973 +0000 UTC m=+2082.586046723" watchObservedRunningTime="2026-01-23 14:38:55.595305123 +0000 UTC m=+2082.590133903" Jan 23 14:39:01 crc kubenswrapper[4775]: I0123 14:39:01.448413 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:39:01 crc kubenswrapper[4775]: I0123 14:39:01.448977 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:39:02 crc kubenswrapper[4775]: I0123 14:39:02.495333 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2jvq2" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="registry-server" probeResult="failure" output=< Jan 23 14:39:02 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Jan 23 14:39:02 crc kubenswrapper[4775]: > Jan 23 14:39:04 crc kubenswrapper[4775]: I0123 14:39:04.714464 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd" Jan 23 14:39:04 crc kubenswrapper[4775]: E0123 14:39:04.714963 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:39:11 crc kubenswrapper[4775]: I0123 14:39:11.517214 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:39:11 crc kubenswrapper[4775]: I0123 14:39:11.577591 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:39:11 crc kubenswrapper[4775]: I0123 14:39:11.784485 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"] Jan 23 14:39:12 crc kubenswrapper[4775]: I0123 14:39:12.776462 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2jvq2" 
podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="registry-server" containerID="cri-o://055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b" gracePeriod=2 Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.257797 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.303547 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities\") pod \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.303953 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxqvc\" (UniqueName: \"kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc\") pod \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.304020 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content\") pod \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\" (UID: \"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd\") " Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.325970 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities" (OuterVolumeSpecName: "utilities") pod "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" (UID: "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.336108 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc" (OuterVolumeSpecName: "kube-api-access-kxqvc") pod "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" (UID: "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd"). InnerVolumeSpecName "kube-api-access-kxqvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.405822 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxqvc\" (UniqueName: \"kubernetes.io/projected/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-kube-api-access-kxqvc\") on node \"crc\" DevicePath \"\"" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.405870 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.460668 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" (UID: "4e9a2482-2cdd-40c0-b4f3-3caeadef05dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.507623 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.789097 4775 generic.go:334] "Generic (PLEG): container finished" podID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerID="055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b" exitCode=0 Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.789173 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2jvq2" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.789237 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerDied","Data":"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b"} Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.790346 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2jvq2" event={"ID":"4e9a2482-2cdd-40c0-b4f3-3caeadef05dd","Type":"ContainerDied","Data":"59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8"} Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.790372 4775 scope.go:117] "RemoveContainer" containerID="055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.821110 4775 scope.go:117] "RemoveContainer" containerID="80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.822219 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"] Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.829436 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2jvq2"] Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.843737 4775 scope.go:117] "RemoveContainer" containerID="15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.883962 4775 scope.go:117] "RemoveContainer" containerID="055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b" Jan 23 14:39:13 crc kubenswrapper[4775]: E0123 14:39:13.884314 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b\": container with ID starting with 055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b not found: ID does not exist" containerID="055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.884354 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b"} err="failed to get container status \"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b\": rpc error: code = NotFound desc = could not find container \"055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b\": container with ID starting with 055c8aa767c79af1212fd0914dc51175e88ab60bb51e3345738549f51d951f2b not found: ID does not exist" Jan 23 14:39:13 crc 
kubenswrapper[4775]: I0123 14:39:13.884378 4775 scope.go:117] "RemoveContainer" containerID="80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad" Jan 23 14:39:13 crc kubenswrapper[4775]: E0123 14:39:13.884626 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad\": container with ID starting with 80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad not found: ID does not exist" containerID="80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.884654 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad"} err="failed to get container status \"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad\": rpc error: code = NotFound desc = could not find container \"80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad\": container with ID starting with 80a7e1b79e10ade5ca3411786ad426abb5b9818c2d785fcefea166ea61d61aad not found: ID does not exist" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.884676 4775 scope.go:117] "RemoveContainer" containerID="15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539" Jan 23 14:39:13 crc kubenswrapper[4775]: E0123 14:39:13.884956 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539\": container with ID starting with 15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539 not found: ID does not exist" containerID="15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539" Jan 23 14:39:13 crc kubenswrapper[4775]: I0123 14:39:13.884979 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539"} err="failed to get container status \"15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539\": rpc error: code = NotFound desc = could not find container \"15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539\": container with ID starting with 15e0409151174fe824f88caad2aac9c730f6a5783cbbb8f5485e33d5ad371539 not found: ID does not exist" Jan 23 14:39:15 crc kubenswrapper[4775]: I0123 14:39:15.730147 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" path="/var/lib/kubelet/pods/4e9a2482-2cdd-40c0-b4f3-3caeadef05dd/volumes" Jan 23 14:39:17 crc kubenswrapper[4775]: I0123 14:39:17.715295 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd" Jan 23 14:39:17 crc kubenswrapper[4775]: E0123 14:39:17.716312 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:39:21 crc kubenswrapper[4775]: E0123 14:39:21.076549 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache]" Jan 23 14:39:30 crc kubenswrapper[4775]: I0123 14:39:30.714291 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd" Jan 23 14:39:30 crc kubenswrapper[4775]: E0123 14:39:30.715374 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:39:31 crc kubenswrapper[4775]: E0123 14:39:31.294204 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache]" Jan 23 14:39:41 crc kubenswrapper[4775]: E0123 14:39:41.493218 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache]" Jan 23 14:39:42 crc kubenswrapper[4775]: I0123 14:39:42.713979 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd" Jan 23 14:39:42 crc kubenswrapper[4775]: E0123 14:39:42.714655 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.408847 4775 scope.go:117] "RemoveContainer" containerID="5660aa2517d0892f37febd6e7336a548ede2e720ab7264d812ad264a50eb46b2" Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.446185 4775 scope.go:117] "RemoveContainer" containerID="8739f351b2bc9ad8d8fe3ea2133ea2116442a4d5b5cf5ef247dd695ec789dddf" Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.498742 4775 scope.go:117] "RemoveContainer" containerID="61ab9533e70d4b69baa5f710542bcb0de5d0a3981f871d6eb9f7dfa31ff05f49" Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.532377 4775 scope.go:117] "RemoveContainer" containerID="f1433b1b1039e1ad5b79126e2b4c0ca66e85ee090af1bd408ecba19e2c872f9a" Jan 23 14:39:48 crc 
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.569698 4775 scope.go:117] "RemoveContainer" containerID="edf9ee8a876623f0b7161ac8eb02db7ebf284b2ff4311bc67eb9dd19aea83eba"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.611673 4775 scope.go:117] "RemoveContainer" containerID="7a9edcf7a6eef68f25783c87ff91eb1a9a70ab35e82018e110b39960153337f3"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.645123 4775 scope.go:117] "RemoveContainer" containerID="d5a625216c448145f1513473de681abbe074c66d1f215fbd1239d870733f21c4"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.671412 4775 scope.go:117] "RemoveContainer" containerID="f00011167bc09af603822453b51182838d413ff1ad414892e875b504e0751ab6"
Jan 23 14:39:48 crc kubenswrapper[4775]: I0123 14:39:48.692580 4775 scope.go:117] "RemoveContainer" containerID="de44f8ed18b4260ec3e0e35481cd929500e4cac5322c792037bcf7ae3fda7a94"
Jan 23 14:39:51 crc kubenswrapper[4775]: E0123 14:39:51.760655 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache]"
Jan 23 14:39:53 crc kubenswrapper[4775]: I0123 14:39:53.218590 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:39:53 crc kubenswrapper[4775]: I0123 14:39:53.219022 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:39:53 crc kubenswrapper[4775]: I0123 14:39:53.722447 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd"
Jan 23 14:39:53 crc kubenswrapper[4775]: E0123 14:39:53.722856 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:40:01 crc kubenswrapper[4775]: E0123 14:40:01.948436 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache]"
Jan 23 14:40:04 crc kubenswrapper[4775]: I0123 14:40:04.714480 4775 scope.go:117] "RemoveContainer"
containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd" Jan 23 14:40:05 crc kubenswrapper[4775]: I0123 14:40:05.350871 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"} Jan 23 14:40:09 crc kubenswrapper[4775]: I0123 14:40:09.394830 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" exitCode=2 Jan 23 14:40:09 crc kubenswrapper[4775]: I0123 14:40:09.394902 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"} Jan 23 14:40:09 crc kubenswrapper[4775]: I0123 14:40:09.395245 4775 scope.go:117] "RemoveContainer" containerID="459bbdd79b9ef93b768ffe9e959701153c794e89353802a93dd1cf650e3593cd" Jan 23 14:40:09 crc kubenswrapper[4775]: I0123 14:40:09.395650 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:40:09 crc kubenswrapper[4775]: E0123 14:40:09.395891 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:40:12 crc kubenswrapper[4775]: E0123 14:40:12.224278 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice/crio-59d0b53a2d770041cd407cda07f9c2f93ff02324e246981fddc5e54130ac08a8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e9a2482_2cdd_40c0_b4f3_3caeadef05dd.slice\": RecentStats: unable to find data in memory cache]" Jan 23 14:40:23 crc kubenswrapper[4775]: I0123 14:40:23.219415 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:40:23 crc kubenswrapper[4775]: I0123 14:40:23.220247 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:40:23 crc kubenswrapper[4775]: I0123 14:40:23.725242 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:40:23 crc kubenswrapper[4775]: E0123 14:40:23.725603 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed 
container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:40:34 crc kubenswrapper[4775]: I0123 14:40:34.714154 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"
Jan 23 14:40:34 crc kubenswrapper[4775]: E0123 14:40:34.714923 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:40:45 crc kubenswrapper[4775]: I0123 14:40:45.714285 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"
Jan 23 14:40:45 crc kubenswrapper[4775]: E0123 14:40:45.715002 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:40:48 crc kubenswrapper[4775]: I0123 14:40:48.903117 4775 scope.go:117] "RemoveContainer" containerID="ed23d1d8c2e578153c70d817dfeffe62e4af30e952a97680b7c773eb23fb2ca1"
Jan 23 14:40:48 crc kubenswrapper[4775]: I0123 14:40:48.934512 4775 scope.go:117] "RemoveContainer" containerID="2a4347263630b9bca7d3c8fbb1ac8953b6f41d8acd21d8aebe8a8fad3474db05"
Jan 23 14:40:48 crc kubenswrapper[4775]: I0123 14:40:48.970541 4775 scope.go:117] "RemoveContainer" containerID="8338a669e0d43937d5f843231e5fbbed5ec502884f9ba96c38e08d3114af925f"
Jan 23 14:40:49 crc kubenswrapper[4775]: I0123 14:40:49.018016 4775 scope.go:117] "RemoveContainer" containerID="3951b61bf0f5fd68e8a231037d3c4c31e8105e9a338b029e1bef1e8babd9023f"
Jan 23 14:40:49 crc kubenswrapper[4775]: I0123 14:40:49.068523 4775 scope.go:117] "RemoveContainer" containerID="6d9268bfe9748ec6624655bc60aabe83c7ae7e713292756baef52641a7e4c393"
Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.218972 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.219431 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.219494 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg"
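
Repeated failures of this HTTP liveness probe are what push the kubelet to kill and restart machine-config-daemon in the records that follow. A rough stand-in for the check itself, assuming the Kubernetes default failureThreshold of 3 and periodSeconds of 10 (this log prints the probe URL and the connection-refused error, but not those settings):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeOnce performs one HTTP liveness check; a transport error such as
// "connect: connection refused" counts as a failure, as does any status
// outside the 2xx/3xx range.
func probeOnce(url string) bool {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode >= 200 && resp.StatusCode < 400
}

func main() {
	const url = "http://127.0.0.1:8798/health" // endpoint from the records above
	failures := 0
	for failures < 3 { // assumed failureThreshold=3, the Kubernetes default
		if probeOnce(url) {
			failures = 0
			time.Sleep(10 * time.Second) // assumed periodSeconds=10
			continue
		}
		failures++
	}
	fmt.Println("liveness probe failed, container will be restarted")
}
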
containerStatusID={"Type":"cri-o","ID":"607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.220603 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" gracePeriod=600 Jan 23 14:40:53 crc kubenswrapper[4775]: E0123 14:40:53.275044 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fea0767_0566_4214_855d_ed0373946271.slice/crio-conmon-607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d.scope\": RecentStats: unable to find data in memory cache]" Jan 23 14:40:53 crc kubenswrapper[4775]: E0123 14:40:53.343349 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.854951 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" exitCode=0 Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.855026 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d"} Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.855117 4775 scope.go:117] "RemoveContainer" containerID="d3d96378db42c2ddc5100447e504efd5667272c1b57105f220bac9f07cfe29ce" Jan 23 14:40:53 crc kubenswrapper[4775]: I0123 14:40:53.855996 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:40:53 crc kubenswrapper[4775]: E0123 14:40:53.856409 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:40:57 crc kubenswrapper[4775]: I0123 14:40:57.713650 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:40:57 crc kubenswrapper[4775]: E0123 14:40:57.714410 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" 
pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:41:04 crc kubenswrapper[4775]: I0123 14:41:04.713966 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:41:04 crc kubenswrapper[4775]: E0123 14:41:04.715112 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:41:10 crc kubenswrapper[4775]: I0123 14:41:10.713627 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:41:10 crc kubenswrapper[4775]: E0123 14:41:10.714691 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:41:15 crc kubenswrapper[4775]: I0123 14:41:15.714574 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:41:15 crc kubenswrapper[4775]: E0123 14:41:15.715879 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:41:25 crc kubenswrapper[4775]: I0123 14:41:25.714636 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:41:25 crc kubenswrapper[4775]: E0123 14:41:25.716133 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:41:27 crc kubenswrapper[4775]: I0123 14:41:27.714496 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:41:27 crc kubenswrapper[4775]: E0123 14:41:27.715010 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:41:37 crc kubenswrapper[4775]: I0123 14:41:37.714723 4775 scope.go:117] "RemoveContainer" 
containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:41:37 crc kubenswrapper[4775]: E0123 14:41:37.715682 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:41:40 crc kubenswrapper[4775]: I0123 14:41:40.715058 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:41:40 crc kubenswrapper[4775]: E0123 14:41:40.715773 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:41:49 crc kubenswrapper[4775]: I0123 14:41:49.219409 4775 scope.go:117] "RemoveContainer" containerID="dfda1a9e78a513115b2113a2fcaec48ff69d5be5bceff17b19195b09fc695118" Jan 23 14:41:49 crc kubenswrapper[4775]: I0123 14:41:49.260736 4775 scope.go:117] "RemoveContainer" containerID="c66c6806d40d02d59cb9c150734f4cbd3c4f3513f91224480738c9614deade7b" Jan 23 14:41:49 crc kubenswrapper[4775]: I0123 14:41:49.311958 4775 scope.go:117] "RemoveContainer" containerID="5adc38c96008a8a594360e5e6bb09c834348a926f5530d7c364ad7b4ca6f9d2b" Jan 23 14:41:50 crc kubenswrapper[4775]: I0123 14:41:50.714225 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:41:50 crc kubenswrapper[4775]: E0123 14:41:50.715035 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:41:53 crc kubenswrapper[4775]: I0123 14:41:53.723510 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:41:53 crc kubenswrapper[4775]: E0123 14:41:53.724303 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:42:04 crc kubenswrapper[4775]: I0123 14:42:04.714948 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:42:04 crc kubenswrapper[4775]: E0123 14:42:04.717712 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage 
pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:42:04 crc kubenswrapper[4775]: I0123 14:42:04.921989 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-7d978f-gdlmv_898c8554-82c6-4777-8869-15981e356a84/keystone-api/0.log"
Jan 23 14:42:07 crc kubenswrapper[4775]: I0123 14:42:07.713919 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d"
Jan 23 14:42:07 crc kubenswrapper[4775]: E0123 14:42:07.715254 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271"
Jan 23 14:42:08 crc kubenswrapper[4775]: I0123 14:42:08.742444 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_2e1f7aa1-1780-4ccb-b1a5-66b9b279d555/memcached/0.log"
Jan 23 14:42:09 crc kubenswrapper[4775]: I0123 14:42:09.308156 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-31e4-account-create-update-2rd2s_95df8848-8035-4302-9689-db060f7d4148/mariadb-account-create-update/0.log"
Jan 23 14:42:09 crc kubenswrapper[4775]: I0123 14:42:09.852465 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-bfq79_98b564d3-5399-47b6-9397-4c3b006f9e13/mariadb-database-create/0.log"
Jan 23 14:42:10 crc kubenswrapper[4775]: I0123 14:42:10.419153 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-db-create-d8kgs_603674a6-1055-4e27-b370-2b57865ebc55/mariadb-database-create/0.log"
Jan 23 14:42:10 crc kubenswrapper[4775]: I0123 14:42:10.919493 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-f1e1-account-create-update-8ng7h_48eb2aff-1769-415f-b284-8d0cbf32a4e9/mariadb-account-create-update/0.log"
Jan 23 14:42:11 crc kubenswrapper[4775]: I0123 14:42:11.409598 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-574a-account-create-update-mjhg8_15c2fb30-3be5-4e47-b2d3-8fbd54665494/mariadb-account-create-update/0.log"
Jan 23 14:42:11 crc kubenswrapper[4775]: I0123 14:42:11.934411 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-db-create-82jzj_891c1a15-7b44-4c8f-be11-d06333a1d0d1/mariadb-database-create/0.log"
Jan 23 14:42:12 crc kubenswrapper[4775]: I0123 14:42:12.561415 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_56066bf2-4408-46e5-8df0-6ce62447bf2a/nova-kuttl-api-log/0.log"
Jan 23 14:42:13 crc kubenswrapper[4775]: I0123 14:42:13.070275 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-cell-mapping-qxjlc_a194a858-8c18-41e1-9a10-428397753ece/nova-manage/0.log"
Jan 23 14:42:13 crc kubenswrapper[4775]: I0123 14:42:13.611496 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_fab3b1c6-093c-4891-957c-fad86eb8fd31/nova-kuttl-cell0-conductor-conductor/0.log"
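
Each of these "Finished parsing log file" paths follows the on-disk layout visible in the records themselves: /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container>/<restart-count>.log. A small sketch that decodes one of those paths (the helper and type names here are illustrative, not from kubelet):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

type podLogRef struct {
	Namespace, Pod, UID, Container, Restart string
}

// parsePodLogPath splits a /var/log/pods path into its components: the
// directory name is <namespace>_<pod>_<uid>, the next component is the
// container name, and the file name is the restart count.
func parsePodLogPath(p string) (podLogRef, error) {
	rel, err := filepath.Rel("/var/log/pods", p)
	if err != nil {
		return podLogRef{}, err
	}
	parts := strings.Split(filepath.ToSlash(rel), "/")
	if len(parts) != 3 {
		return podLogRef{}, fmt.Errorf("unexpected layout: %s", p)
	}
	meta := strings.SplitN(parts[0], "_", 3) // "_" cannot appear in namespace or pod names
	if len(meta) != 3 {
		return podLogRef{}, fmt.Errorf("unexpected pod dir: %s", parts[0])
	}
	return podLogRef{
		Namespace: meta[0],
		Pod:       meta[1],
		UID:       meta[2],
		Container: parts[1],
		Restart:   strings.TrimSuffix(parts[2], ".log"),
	}, nil
}

func main() {
	ref, err := parsePodLogPath("/var/log/pods/nova-kuttl-default_memcached-0_2e1f7aa1-1780-4ccb-b1a5-66b9b279d555/memcached/0.log")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", ref)
}

The restart count in the file name is why the nova-manage log for the cell-delete pod shows up just below as 5.log after its repeated CrashLoopBackOff restarts.
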
Jan 23 14:42:14 crc kubenswrapper[4775]: I0123 14:42:14.077050 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-db-sync-2l6n8_12f70e17-ec31-43fc-ac56-d1742f962de5/nova-kuttl-cell0-conductor-db-sync/0.log"
Jan 23 14:42:14 crc kubenswrapper[4775]: I0123 14:42:14.628492 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-delete-w7tbz_9e8f7bb4-6671-4ef8-b35a-45059af73b01/nova-manage/5.log"
Jan 23 14:42:15 crc kubenswrapper[4775]: I0123 14:42:15.173831 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-mapping-4gfb8_3ef19dc5-1d78-479c-8220-340c46c44bdf/nova-manage/0.log"
Jan 23 14:42:15 crc kubenswrapper[4775]: I0123 14:42:15.714543 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"
Jan 23 14:42:15 crc kubenswrapper[4775]: E0123 14:42:15.714929 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01"
Jan 23 14:42:15 crc kubenswrapper[4775]: I0123 14:42:15.795390 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_1fd448a3-6897-490f-9c92-98590cee53ca/nova-kuttl-cell1-conductor-conductor/0.log"
Jan 23 14:42:16 crc kubenswrapper[4775]: I0123 14:42:16.375332 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-db-sync-sjz5r_263d2fcc-c533-4291-8e78-d8e9a2ee2894/nova-kuttl-cell1-conductor-db-sync/0.log"
Jan 23 14:42:16 crc kubenswrapper[4775]: I0123 14:42:16.910624 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_cb15b357-f464-4e43-a038-3b9e72455d49/nova-kuttl-cell1-novncproxy-novncproxy/0.log"
Jan 23 14:42:17 crc kubenswrapper[4775]: I0123 14:42:17.527258 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_72d0a843-11de-43a6-9c92-6a65a6d406ec/nova-kuttl-metadata-log/0.log"
Jan 23 14:42:18 crc kubenswrapper[4775]: I0123 14:42:18.137253 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0/nova-kuttl-scheduler-scheduler/0.log"
Jan 23 14:42:18 crc kubenswrapper[4775]: I0123 14:42:18.682082 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_481cbe1b-2796-4ad2-a342-3661afa62383/galera/0.log"
Jan 23 14:42:19 crc kubenswrapper[4775]: I0123 14:42:19.297382 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_372c512d-5894-49da-ae1e-cb3e54aadacc/galera/0.log"
Jan 23 14:42:19 crc kubenswrapper[4775]: I0123 14:42:19.882174 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_76733f2d-491c-45dd-bcf5-1a4423019717/openstackclient/0.log"
Jan 23 14:42:20 crc kubenswrapper[4775]: I0123 14:42:20.523307 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7787b67bb8-psq7t_6b653824-2e32-431a-8b16-f8687610c0fe/placement-log/0.log"
Jan 23 14:42:20 crc
kubenswrapper[4775]: I0123 14:42:20.714272 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:42:20 crc kubenswrapper[4775]: E0123 14:42:20.714662 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:42:21 crc kubenswrapper[4775]: I0123 14:42:21.025010 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_401a94b6-0628-4cea-b62a-c3229a913d16/rabbitmq/0.log" Jan 23 14:42:21 crc kubenswrapper[4775]: I0123 14:42:21.575746 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_4b05c189-a694-4cbc-b679-a974e6bf99bc/rabbitmq/0.log" Jan 23 14:42:22 crc kubenswrapper[4775]: I0123 14:42:22.183941 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_70288c27-7f95-4843-a8fb-f2ac58ea8e1f/rabbitmq/0.log" Jan 23 14:42:28 crc kubenswrapper[4775]: I0123 14:42:28.713779 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:42:28 crc kubenswrapper[4775]: E0123 14:42:28.714970 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:42:35 crc kubenswrapper[4775]: I0123 14:42:35.714314 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:42:35 crc kubenswrapper[4775]: E0123 14:42:35.715385 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:42:42 crc kubenswrapper[4775]: I0123 14:42:42.715112 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:42:42 crc kubenswrapper[4775]: E0123 14:42:42.716079 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-w7tbz_nova-kuttl-default(9e8f7bb4-6671-4ef8-b35a-45059af73b01)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" Jan 23 14:42:48 crc kubenswrapper[4775]: I0123 14:42:48.714050 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:42:48 crc kubenswrapper[4775]: E0123 14:42:48.714792 4775 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:42:55 crc kubenswrapper[4775]: I0123 14:42:55.714504 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:42:56 crc kubenswrapper[4775]: I0123 14:42:56.164147 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerStarted","Data":"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e"} Jan 23 14:42:57 crc kubenswrapper[4775]: I0123 14:42:57.212275 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"] Jan 23 14:42:57 crc kubenswrapper[4775]: I0123 14:42:57.213230 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" containerID="cri-o://bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e" gracePeriod=30 Jan 23 14:42:58 crc kubenswrapper[4775]: I0123 14:42:58.379436 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/extract/0.log" Jan 23 14:42:58 crc kubenswrapper[4775]: I0123 14:42:58.917197 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/extract/0.log" Jan 23 14:42:59 crc kubenswrapper[4775]: I0123 14:42:59.440024 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-pk9jd_56ee00d0-c0f0-442a-bf4a-7335b62c1c4e/manager/0.log" Jan 23 14:42:59 crc kubenswrapper[4775]: I0123 14:42:59.928527 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-dz7ft_9ce79c2a-2c52-48de-80a6-887d592578d3/manager/0.log" Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.422128 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-ppxmc_352223d5-fa0a-43df-8bad-0eaa9b6b439d/manager/0.log" Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.845166 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.904284 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-jq89z_64bae0eb-d703-4058-a545-b42d62045b90/manager/0.log" Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.981553 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts\") pod \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.981859 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data\") pod \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.982795 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn495\" (UniqueName: \"kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495\") pod \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\" (UID: \"9e8f7bb4-6671-4ef8-b35a-45059af73b01\") " Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.988538 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495" (OuterVolumeSpecName: "kube-api-access-rn495") pod "9e8f7bb4-6671-4ef8-b35a-45059af73b01" (UID: "9e8f7bb4-6671-4ef8-b35a-45059af73b01"). InnerVolumeSpecName "kube-api-access-rn495". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:43:00 crc kubenswrapper[4775]: I0123 14:43:00.990024 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts" (OuterVolumeSpecName: "scripts") pod "9e8f7bb4-6671-4ef8-b35a-45059af73b01" (UID: "9e8f7bb4-6671-4ef8-b35a-45059af73b01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.012749 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data" (OuterVolumeSpecName: "config-data") pod "9e8f7bb4-6671-4ef8-b35a-45059af73b01" (UID: "9e8f7bb4-6671-4ef8-b35a-45059af73b01"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.085349 4775 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-scripts\") on node \"crc\" DevicePath \"\"" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.085387 4775 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e8f7bb4-6671-4ef8-b35a-45059af73b01-config-data\") on node \"crc\" DevicePath \"\"" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.085409 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn495\" (UniqueName: \"kubernetes.io/projected/9e8f7bb4-6671-4ef8-b35a-45059af73b01-kube-api-access-rn495\") on node \"crc\" DevicePath \"\"" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.221591 4775 generic.go:334] "Generic (PLEG): container finished" podID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerID="bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e" exitCode=2 Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.221647 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e"} Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.221685 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" event={"ID":"9e8f7bb4-6671-4ef8-b35a-45059af73b01","Type":"ContainerDied","Data":"27562d541f20254a2f84db2c1a11a1410fb6f2f590a4c41036a86757dd88cf6b"} Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.221713 4775 scope.go:117] "RemoveContainer" containerID="bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.221719 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.256296 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.286837 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"] Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.293701 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-w7tbz"] Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.335581 4775 scope.go:117] "RemoveContainer" containerID="bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e" Jan 23 14:43:01 crc kubenswrapper[4775]: E0123 14:43:01.337191 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e\": container with ID starting with bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e not found: ID does not exist" containerID="bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.337231 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e"} err="failed to get container status \"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e\": rpc error: code = NotFound desc = could not find container \"bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e\": container with ID starting with bbfdd03b35aa0f43eb005676d0bb094e23186b219df60ec6cc05fba81339a83e not found: ID does not exist" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.337254 4775 scope.go:117] "RemoveContainer" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:43:01 crc kubenswrapper[4775]: E0123 14:43:01.339111 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5\": container with ID starting with 4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5 not found: ID does not exist" containerID="4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.339139 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5"} err="failed to get container status \"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5\": rpc error: code = NotFound desc = could not find container \"4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5\": container with ID starting with 4b2c0b2a49812dd0dc739e1ffa38f5215b50c75aeeb9373e46d226635c8575d5 not found: ID does not exist" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.435244 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-xrmvt_841fb528-61a8-445e-a135-be26295bc975/manager/0.log" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.729929 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" 
path="/var/lib/kubelet/pods/9e8f7bb4-6671-4ef8-b35a-45059af73b01/volumes" Jan 23 14:43:01 crc kubenswrapper[4775]: I0123 14:43:01.950939 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sg9x5_d9e69fcf-58c9-45fe-a291-4628c8219e10/manager/0.log" Jan 23 14:43:02 crc kubenswrapper[4775]: I0123 14:43:02.662620 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-mcrj4_5a65a9ef-28c7-46ae-826d-5546af1103a5/manager/0.log" Jan 23 14:43:02 crc kubenswrapper[4775]: I0123 14:43:02.714494 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:43:02 crc kubenswrapper[4775]: E0123 14:43:02.717774 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:43:03 crc kubenswrapper[4775]: I0123 14:43:03.172729 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-f7lm6_d98bebb2-a42a-45a6-b452-a82ce1f62896/manager/0.log" Jan 23 14:43:03 crc kubenswrapper[4775]: I0123 14:43:03.756113 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bgbpj_0784c928-e0c5-4afb-99cb-4f1f96820a14/manager/0.log" Jan 23 14:43:04 crc kubenswrapper[4775]: I0123 14:43:04.216868 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-pfdc5_853c6152-25bf-4374-a941-f9cd4202c87f/manager/0.log" Jan 23 14:43:04 crc kubenswrapper[4775]: I0123 14:43:04.770079 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg_bb6ce8ae-8d3f-4988-9386-6a20487f8ae9/manager/0.log" Jan 23 14:43:05 crc kubenswrapper[4775]: I0123 14:43:05.252583 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sxkzh_9710b785-e422-4aca-88e8-e88d26d4e724/manager/0.log" Jan 23 14:43:06 crc kubenswrapper[4775]: I0123 14:43:06.267336 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c5fcc4cc6-wwr78_92377252-2e4d-48bb-95ea-724a4ff5c788/manager/0.log" Jan 23 14:43:06 crc kubenswrapper[4775]: I0123 14:43:06.745787 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-x4gqk_78f375c8-5d62-4cbb-b348-8205d476d603/registry-server/0.log" Jan 23 14:43:07 crc kubenswrapper[4775]: I0123 14:43:07.260092 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-vl7m5_a07598ff-60cc-482e-a551-af751575709c/manager/0.log" Jan 23 14:43:07 crc kubenswrapper[4775]: I0123 14:43:07.797715 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854zk48c_44a963d8-d403-42d5-acd2-a0379f07db51/manager/0.log" Jan 23 14:43:08 crc kubenswrapper[4775]: I0123 
14:43:08.714586 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bb8f85db-bkqk9_313b5382-60cf-4627-8ba7-a091fc457989/manager/0.log" Jan 23 14:43:09 crc kubenswrapper[4775]: I0123 14:43:09.213393 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5czdz_a0ddc210-ca29-42e4-a4c2-a07881434fed/registry-server/0.log" Jan 23 14:43:09 crc kubenswrapper[4775]: I0123 14:43:09.745784 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xst4r_3d7c7bc6-5124-4cd4-a406-448ca94ba640/manager/0.log" Jan 23 14:43:10 crc kubenswrapper[4775]: I0123 14:43:10.283109 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-n4k5s_072b9a9d-8a08-454c-b1b6-628fcdcc91df/manager/0.log" Jan 23 14:43:10 crc kubenswrapper[4775]: I0123 14:43:10.817946 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2lhsf_f9da51f1-a035-44b8-9391-0d6018a84c61/operator/0.log" Jan 23 14:43:11 crc kubenswrapper[4775]: I0123 14:43:11.321688 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-nqw74_ecef6080-ea2c-43f4-8ffa-da2ceb59369d/manager/0.log" Jan 23 14:43:11 crc kubenswrapper[4775]: I0123 14:43:11.841705 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-jrhlh_91da96b4-921a-4b88-9804-55745989e08b/manager/0.log" Jan 23 14:43:12 crc kubenswrapper[4775]: I0123 14:43:12.360248 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-xtmz8_9f9597bf-12a1-4204-ac57-37c4c0189687/manager/0.log" Jan 23 14:43:12 crc kubenswrapper[4775]: I0123 14:43:12.805376 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-v8dw9_272dcd84-1bb6-42cb-8c8e-6851f9f031de/manager/0.log" Jan 23 14:43:14 crc kubenswrapper[4775]: I0123 14:43:14.714567 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:43:14 crc kubenswrapper[4775]: E0123 14:43:14.715046 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:43:18 crc kubenswrapper[4775]: I0123 14:43:18.014784 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-7d978f-gdlmv_898c8554-82c6-4777-8869-15981e356a84/keystone-api/0.log" Jan 23 14:43:22 crc kubenswrapper[4775]: I0123 14:43:22.180735 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_2e1f7aa1-1780-4ccb-b1a5-66b9b279d555/memcached/0.log" Jan 23 14:43:22 crc kubenswrapper[4775]: I0123 14:43:22.733477 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_nova-api-31e4-account-create-update-2rd2s_95df8848-8035-4302-9689-db060f7d4148/mariadb-account-create-update/0.log" Jan 23 14:43:23 crc kubenswrapper[4775]: I0123 14:43:23.289907 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-api-db-create-bfq79_98b564d3-5399-47b6-9397-4c3b006f9e13/mariadb-database-create/0.log" Jan 23 14:43:23 crc kubenswrapper[4775]: I0123 14:43:23.814582 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-db-create-d8kgs_603674a6-1055-4e27-b370-2b57865ebc55/mariadb-database-create/0.log" Jan 23 14:43:24 crc kubenswrapper[4775]: I0123 14:43:24.287003 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell0-f1e1-account-create-update-8ng7h_48eb2aff-1769-415f-b284-8d0cbf32a4e9/mariadb-account-create-update/0.log" Jan 23 14:43:24 crc kubenswrapper[4775]: I0123 14:43:24.809417 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-574a-account-create-update-mjhg8_15c2fb30-3be5-4e47-b2d3-8fbd54665494/mariadb-account-create-update/0.log" Jan 23 14:43:25 crc kubenswrapper[4775]: I0123 14:43:25.270067 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-cell1-db-create-82jzj_891c1a15-7b44-4c8f-be11-d06333a1d0d1/mariadb-database-create/0.log" Jan 23 14:43:25 crc kubenswrapper[4775]: I0123 14:43:25.866345 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_56066bf2-4408-46e5-8df0-6ce62447bf2a/nova-kuttl-api-log/0.log" Jan 23 14:43:26 crc kubenswrapper[4775]: I0123 14:43:26.446186 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-cell-mapping-qxjlc_a194a858-8c18-41e1-9a10-428397753ece/nova-manage/0.log" Jan 23 14:43:27 crc kubenswrapper[4775]: I0123 14:43:27.061201 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_fab3b1c6-093c-4891-957c-fad86eb8fd31/nova-kuttl-cell0-conductor-conductor/0.log" Jan 23 14:43:27 crc kubenswrapper[4775]: I0123 14:43:27.640599 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-db-sync-2l6n8_12f70e17-ec31-43fc-ac56-d1742f962de5/nova-kuttl-cell0-conductor-db-sync/0.log" Jan 23 14:43:27 crc kubenswrapper[4775]: I0123 14:43:27.715068 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:43:27 crc kubenswrapper[4775]: E0123 14:43:27.716097 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:43:28 crc kubenswrapper[4775]: I0123 14:43:28.202110 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-cell-mapping-4gfb8_3ef19dc5-1d78-479c-8220-340c46c44bdf/nova-manage/0.log" Jan 23 14:43:28 crc kubenswrapper[4775]: I0123 14:43:28.765175 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_1fd448a3-6897-490f-9c92-98590cee53ca/nova-kuttl-cell1-conductor-conductor/0.log" Jan 23 14:43:29 crc kubenswrapper[4775]: I0123 14:43:29.402795 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-db-sync-sjz5r_263d2fcc-c533-4291-8e78-d8e9a2ee2894/nova-kuttl-cell1-conductor-db-sync/0.log" Jan 23 14:43:30 crc kubenswrapper[4775]: I0123 14:43:30.019879 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_cb15b357-f464-4e43-a038-3b9e72455d49/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 23 14:43:30 crc kubenswrapper[4775]: I0123 14:43:30.602320 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_72d0a843-11de-43a6-9c92-6a65a6d406ec/nova-kuttl-metadata-log/0.log" Jan 23 14:43:31 crc kubenswrapper[4775]: I0123 14:43:31.101733 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0/nova-kuttl-scheduler-scheduler/0.log" Jan 23 14:43:31 crc kubenswrapper[4775]: I0123 14:43:31.577232 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_481cbe1b-2796-4ad2-a342-3661afa62383/galera/0.log" Jan 23 14:43:32 crc kubenswrapper[4775]: I0123 14:43:32.117733 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_372c512d-5894-49da-ae1e-cb3e54aadacc/galera/0.log" Jan 23 14:43:32 crc kubenswrapper[4775]: I0123 14:43:32.581680 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_76733f2d-491c-45dd-bcf5-1a4423019717/openstackclient/0.log" Jan 23 14:43:33 crc kubenswrapper[4775]: I0123 14:43:33.139277 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7787b67bb8-psq7t_6b653824-2e32-431a-8b16-f8687610c0fe/placement-log/0.log" Jan 23 14:43:33 crc kubenswrapper[4775]: I0123 14:43:33.775462 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_401a94b6-0628-4cea-b62a-c3229a913d16/rabbitmq/0.log" Jan 23 14:43:34 crc kubenswrapper[4775]: I0123 14:43:34.352103 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_4b05c189-a694-4cbc-b679-a974e6bf99bc/rabbitmq/0.log" Jan 23 14:43:34 crc kubenswrapper[4775]: I0123 14:43:34.870458 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_70288c27-7f95-4843-a8fb-f2ac58ea8e1f/rabbitmq/0.log" Jan 23 14:43:41 crc kubenswrapper[4775]: I0123 14:43:41.714226 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:43:41 crc kubenswrapper[4775]: E0123 14:43:41.715545 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:43:55 crc kubenswrapper[4775]: I0123 14:43:55.714354 4775 scope.go:117] "RemoveContainer" 
containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:43:55 crc kubenswrapper[4775]: E0123 14:43:55.715488 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:44:08 crc kubenswrapper[4775]: I0123 14:44:08.713970 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:44:08 crc kubenswrapper[4775]: E0123 14:44:08.714823 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:44:10 crc kubenswrapper[4775]: I0123 14:44:10.539636 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/extract/0.log" Jan 23 14:44:11 crc kubenswrapper[4775]: I0123 14:44:11.071171 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/extract/0.log" Jan 23 14:44:11 crc kubenswrapper[4775]: I0123 14:44:11.613485 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-pk9jd_56ee00d0-c0f0-442a-bf4a-7335b62c1c4e/manager/0.log" Jan 23 14:44:12 crc kubenswrapper[4775]: I0123 14:44:12.088986 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-dz7ft_9ce79c2a-2c52-48de-80a6-887d592578d3/manager/0.log" Jan 23 14:44:12 crc kubenswrapper[4775]: I0123 14:44:12.577292 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-ppxmc_352223d5-fa0a-43df-8bad-0eaa9b6b439d/manager/0.log" Jan 23 14:44:13 crc kubenswrapper[4775]: I0123 14:44:13.025471 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-jq89z_64bae0eb-d703-4058-a545-b42d62045b90/manager/0.log" Jan 23 14:44:13 crc kubenswrapper[4775]: I0123 14:44:13.503669 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-xrmvt_841fb528-61a8-445e-a135-be26295bc975/manager/0.log" Jan 23 14:44:13 crc kubenswrapper[4775]: I0123 14:44:13.961423 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sg9x5_d9e69fcf-58c9-45fe-a291-4628c8219e10/manager/0.log" Jan 23 14:44:14 crc kubenswrapper[4775]: I0123 14:44:14.653081 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-mcrj4_5a65a9ef-28c7-46ae-826d-5546af1103a5/manager/0.log" Jan 23 14:44:15 crc 
kubenswrapper[4775]: I0123 14:44:15.194420 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-f7lm6_d98bebb2-a42a-45a6-b452-a82ce1f62896/manager/0.log" Jan 23 14:44:15 crc kubenswrapper[4775]: I0123 14:44:15.825071 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bgbpj_0784c928-e0c5-4afb-99cb-4f1f96820a14/manager/0.log" Jan 23 14:44:16 crc kubenswrapper[4775]: I0123 14:44:16.415133 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-pfdc5_853c6152-25bf-4374-a941-f9cd4202c87f/manager/0.log" Jan 23 14:44:17 crc kubenswrapper[4775]: I0123 14:44:17.023505 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg_bb6ce8ae-8d3f-4988-9386-6a20487f8ae9/manager/0.log" Jan 23 14:44:17 crc kubenswrapper[4775]: I0123 14:44:17.566137 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sxkzh_9710b785-e422-4aca-88e8-e88d26d4e724/manager/0.log" Jan 23 14:44:18 crc kubenswrapper[4775]: I0123 14:44:18.674511 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c5fcc4cc6-wwr78_92377252-2e4d-48bb-95ea-724a4ff5c788/manager/0.log" Jan 23 14:44:19 crc kubenswrapper[4775]: I0123 14:44:19.136871 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-x4gqk_78f375c8-5d62-4cbb-b348-8205d476d603/registry-server/0.log" Jan 23 14:44:19 crc kubenswrapper[4775]: I0123 14:44:19.556791 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-vl7m5_a07598ff-60cc-482e-a551-af751575709c/manager/0.log" Jan 23 14:44:20 crc kubenswrapper[4775]: I0123 14:44:20.072463 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854zk48c_44a963d8-d403-42d5-acd2-a0379f07db51/manager/0.log" Jan 23 14:44:21 crc kubenswrapper[4775]: I0123 14:44:21.022254 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bb8f85db-bkqk9_313b5382-60cf-4627-8ba7-a091fc457989/manager/0.log" Jan 23 14:44:21 crc kubenswrapper[4775]: I0123 14:44:21.498174 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5czdz_a0ddc210-ca29-42e4-a4c2-a07881434fed/registry-server/0.log" Jan 23 14:44:22 crc kubenswrapper[4775]: I0123 14:44:22.045737 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xst4r_3d7c7bc6-5124-4cd4-a406-448ca94ba640/manager/0.log" Jan 23 14:44:22 crc kubenswrapper[4775]: I0123 14:44:22.561163 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-n4k5s_072b9a9d-8a08-454c-b1b6-628fcdcc91df/manager/0.log" Jan 23 14:44:22 crc kubenswrapper[4775]: I0123 14:44:22.714294 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:44:22 crc kubenswrapper[4775]: E0123 14:44:22.714726 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:44:23 crc kubenswrapper[4775]: I0123 14:44:23.071517 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2lhsf_f9da51f1-a035-44b8-9391-0d6018a84c61/operator/0.log" Jan 23 14:44:23 crc kubenswrapper[4775]: I0123 14:44:23.545165 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-nqw74_ecef6080-ea2c-43f4-8ffa-da2ceb59369d/manager/0.log" Jan 23 14:44:24 crc kubenswrapper[4775]: I0123 14:44:24.000954 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-jrhlh_91da96b4-921a-4b88-9804-55745989e08b/manager/0.log" Jan 23 14:44:24 crc kubenswrapper[4775]: I0123 14:44:24.463260 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-xtmz8_9f9597bf-12a1-4204-ac57-37c4c0189687/manager/0.log" Jan 23 14:44:24 crc kubenswrapper[4775]: I0123 14:44:24.971495 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-v8dw9_272dcd84-1bb6-42cb-8c8e-6851f9f031de/manager/0.log" Jan 23 14:44:33 crc kubenswrapper[4775]: I0123 14:44:33.724448 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:44:33 crc kubenswrapper[4775]: E0123 14:44:33.725454 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:44:44 crc kubenswrapper[4775]: I0123 14:44:44.714898 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:44:44 crc kubenswrapper[4775]: E0123 14:44:44.716232 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.504175 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-6vw8s/must-gather-9lvjt"] Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505192 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505212 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc 
kubenswrapper[4775]: E0123 14:44:56.505225 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505233 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505257 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="extract-utilities" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505267 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="extract-utilities" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505278 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="registry-server" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505285 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="registry-server" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505300 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505308 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505317 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="extract-content" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505324 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="extract-content" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505337 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505343 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505357 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505364 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505531 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505542 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505554 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505564 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e9a2482-2cdd-40c0-b4f3-3caeadef05dd" containerName="registry-server" Jan 23 14:44:56 crc 
kubenswrapper[4775]: I0123 14:44:56.505578 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505586 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505771 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505783 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: E0123 14:44:56.505820 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.505830 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.506022 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.506036 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e8f7bb4-6671-4ef8-b35a-45059af73b01" containerName="nova-manage" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.506738 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.508687 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6vw8s"/"kube-root-ca.crt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.511106 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-6vw8s"/"openshift-service-ca.crt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.528948 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6vw8s/must-gather-9lvjt"] Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.571688 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.571755 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86lv4\" (UniqueName: \"kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.672956 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: 
I0123 14:44:56.673282 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86lv4\" (UniqueName: \"kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.673375 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.697577 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86lv4\" (UniqueName: \"kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4\") pod \"must-gather-9lvjt\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:56 crc kubenswrapper[4775]: I0123 14:44:56.822789 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:44:57 crc kubenswrapper[4775]: I0123 14:44:57.283816 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-6vw8s/must-gather-9lvjt"] Jan 23 14:44:57 crc kubenswrapper[4775]: I0123 14:44:57.287570 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:44:57 crc kubenswrapper[4775]: I0123 14:44:57.352748 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" event={"ID":"41dd897c-4a67-4a0a-a7a3-c17b6d05653d","Type":"ContainerStarted","Data":"99e1c29464fe9bcd176edc66d996f75925c52d9eb2cd0ca1823f89a4e8988e3b"} Jan 23 14:44:57 crc kubenswrapper[4775]: I0123 14:44:57.713517 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:44:57 crc kubenswrapper[4775]: E0123 14:44:57.713819 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.153583 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2"] Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.156265 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.159843 4775 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.160318 4775 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.164564 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2"] Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.229487 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4wzw\" (UniqueName: \"kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.229551 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.229593 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.331563 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4wzw\" (UniqueName: \"kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.331641 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.331681 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.332784 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume\") pod 
\"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.338938 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.349307 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4wzw\" (UniqueName: \"kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw\") pod \"collect-profiles-29486325-8cxr2\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:00 crc kubenswrapper[4775]: I0123 14:45:00.480190 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:04 crc kubenswrapper[4775]: I0123 14:45:04.357391 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2"] Jan 23 14:45:04 crc kubenswrapper[4775]: I0123 14:45:04.408113 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" event={"ID":"77323df1-44af-4a49-bddf-3448c6d60ef1","Type":"ContainerStarted","Data":"5bc1737e13d3f09907722fd400db2544dc6c9d6d22f1a34098443b6c8fd3462e"} Jan 23 14:45:04 crc kubenswrapper[4775]: I0123 14:45:04.410458 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" event={"ID":"41dd897c-4a67-4a0a-a7a3-c17b6d05653d","Type":"ContainerStarted","Data":"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51"} Jan 23 14:45:04 crc kubenswrapper[4775]: I0123 14:45:04.433402 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" podStartSLOduration=1.791145347 podStartE2EDuration="8.433383001s" podCreationTimestamp="2026-01-23 14:44:56 +0000 UTC" firstStartedPulling="2026-01-23 14:44:57.287538787 +0000 UTC m=+2444.282367527" lastFinishedPulling="2026-01-23 14:45:03.929776431 +0000 UTC m=+2450.924605181" observedRunningTime="2026-01-23 14:45:04.425474619 +0000 UTC m=+2451.420303369" watchObservedRunningTime="2026-01-23 14:45:04.433383001 +0000 UTC m=+2451.428211751" Jan 23 14:45:05 crc kubenswrapper[4775]: I0123 14:45:05.423295 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" event={"ID":"41dd897c-4a67-4a0a-a7a3-c17b6d05653d","Type":"ContainerStarted","Data":"9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47"} Jan 23 14:45:05 crc kubenswrapper[4775]: I0123 14:45:05.426225 4775 generic.go:334] "Generic (PLEG): container finished" podID="77323df1-44af-4a49-bddf-3448c6d60ef1" containerID="e0b9a370a20bb36d5e9d3347ae55e9688b7c1a75755503e572a5a4c809dc9026" exitCode=0 Jan 23 14:45:05 crc kubenswrapper[4775]: I0123 14:45:05.426312 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" 
event={"ID":"77323df1-44af-4a49-bddf-3448c6d60ef1","Type":"ContainerDied","Data":"e0b9a370a20bb36d5e9d3347ae55e9688b7c1a75755503e572a5a4c809dc9026"} Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.816987 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.961571 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume\") pod \"77323df1-44af-4a49-bddf-3448c6d60ef1\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.961627 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4wzw\" (UniqueName: \"kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw\") pod \"77323df1-44af-4a49-bddf-3448c6d60ef1\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.961686 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume\") pod \"77323df1-44af-4a49-bddf-3448c6d60ef1\" (UID: \"77323df1-44af-4a49-bddf-3448c6d60ef1\") " Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.962491 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume" (OuterVolumeSpecName: "config-volume") pod "77323df1-44af-4a49-bddf-3448c6d60ef1" (UID: "77323df1-44af-4a49-bddf-3448c6d60ef1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.970007 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "77323df1-44af-4a49-bddf-3448c6d60ef1" (UID: "77323df1-44af-4a49-bddf-3448c6d60ef1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 23 14:45:06 crc kubenswrapper[4775]: I0123 14:45:06.980375 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw" (OuterVolumeSpecName: "kube-api-access-c4wzw") pod "77323df1-44af-4a49-bddf-3448c6d60ef1" (UID: "77323df1-44af-4a49-bddf-3448c6d60ef1"). InnerVolumeSpecName "kube-api-access-c4wzw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.063667 4775 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/77323df1-44af-4a49-bddf-3448c6d60ef1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.063704 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4wzw\" (UniqueName: \"kubernetes.io/projected/77323df1-44af-4a49-bddf-3448c6d60ef1-kube-api-access-c4wzw\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.063715 4775 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77323df1-44af-4a49-bddf-3448c6d60ef1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.442781 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" event={"ID":"77323df1-44af-4a49-bddf-3448c6d60ef1","Type":"ContainerDied","Data":"5bc1737e13d3f09907722fd400db2544dc6c9d6d22f1a34098443b6c8fd3462e"} Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.442841 4775 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bc1737e13d3f09907722fd400db2544dc6c9d6d22f1a34098443b6c8fd3462e" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.442846 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486325-8cxr2" Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.894502 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b"] Jan 23 14:45:07 crc kubenswrapper[4775]: I0123 14:45:07.899247 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486280-gf96b"] Jan 23 14:45:09 crc kubenswrapper[4775]: I0123 14:45:09.723063 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6b6f17-bb56-49ba-8487-6e07346780a1" path="/var/lib/kubelet/pods/2d6b6f17-bb56-49ba-8487-6e07346780a1/volumes" Jan 23 14:45:11 crc kubenswrapper[4775]: I0123 14:45:11.713630 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:45:11 crc kubenswrapper[4775]: E0123 14:45:11.713872 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.593725 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:18 crc kubenswrapper[4775]: E0123 14:45:18.594281 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77323df1-44af-4a49-bddf-3448c6d60ef1" containerName="collect-profiles" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.594293 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="77323df1-44af-4a49-bddf-3448c6d60ef1" containerName="collect-profiles" Jan 23 14:45:18 crc 
kubenswrapper[4775]: I0123 14:45:18.594422 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="77323df1-44af-4a49-bddf-3448c6d60ef1" containerName="collect-profiles" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.595491 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.612511 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.628654 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brqv9\" (UniqueName: \"kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.628934 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.629087 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.730607 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.730709 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.730795 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brqv9\" (UniqueName: \"kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.731564 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.734829 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.750477 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brqv9\" (UniqueName: \"kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9\") pod \"redhat-marketplace-b72df\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:18 crc kubenswrapper[4775]: I0123 14:45:18.916199 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:19 crc kubenswrapper[4775]: I0123 14:45:19.187719 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:19 crc kubenswrapper[4775]: E0123 14:45:19.502586 4775 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod721aa0ee_a7d9_4b8c_abb6_d0d6bcf2d4e8.slice/crio-conmon-a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod721aa0ee_a7d9_4b8c_abb6_d0d6bcf2d4e8.slice/crio-a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc.scope\": RecentStats: unable to find data in memory cache]" Jan 23 14:45:19 crc kubenswrapper[4775]: I0123 14:45:19.522772 4775 generic.go:334] "Generic (PLEG): container finished" podID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerID="a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc" exitCode=0 Jan 23 14:45:19 crc kubenswrapper[4775]: I0123 14:45:19.522845 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerDied","Data":"a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc"} Jan 23 14:45:19 crc kubenswrapper[4775]: I0123 14:45:19.523111 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerStarted","Data":"1876587843ba1a3129234b73c3573e75d8bdb5bd737183e524a0c5243824c914"} Jan 23 14:45:20 crc kubenswrapper[4775]: I0123 14:45:20.534735 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerStarted","Data":"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369"} Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.546794 4775 generic.go:334] "Generic (PLEG): container finished" podID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerID="d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369" exitCode=0 Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.546922 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerDied","Data":"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369"} Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.628855 4775 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.630768 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.646048 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.677983 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.678053 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.678127 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94wnx\" (UniqueName: \"kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.779173 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.779240 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.779319 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94wnx\" (UniqueName: \"kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.779629 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.779722 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities\") pod \"certified-operators-lq9jn\" (UID: 
\"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.792901 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.794733 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.802842 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94wnx\" (UniqueName: \"kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx\") pod \"certified-operators-lq9jn\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.812308 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.880995 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.881086 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.881131 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkfqb\" (UniqueName: \"kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.972774 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.982682 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.982745 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkfqb\" (UniqueName: \"kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.982845 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.983429 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:21 crc kubenswrapper[4775]: I0123 14:45:21.983474 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.002464 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkfqb\" (UniqueName: \"kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb\") pod \"community-operators-dx9cv\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.140895 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.263289 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.555121 4775 generic.go:334] "Generic (PLEG): container finished" podID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerID="4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea" exitCode=0 Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.555231 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerDied","Data":"4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea"} Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.555497 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerStarted","Data":"3994761e9ce661e4a34641df7fae6257581617303ab3ad370a636bae72fa58e7"} Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.560904 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerStarted","Data":"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466"} Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.600196 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b72df" podStartSLOduration=2.102605028 podStartE2EDuration="4.600176317s" podCreationTimestamp="2026-01-23 14:45:18 +0000 UTC" firstStartedPulling="2026-01-23 14:45:19.5249901 +0000 UTC m=+2466.519818850" lastFinishedPulling="2026-01-23 14:45:22.022561399 +0000 UTC m=+2469.017390139" observedRunningTime="2026-01-23 14:45:22.595236808 +0000 UTC m=+2469.590065558" watchObservedRunningTime="2026-01-23 14:45:22.600176317 +0000 UTC m=+2469.595005057" Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.713580 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:45:22 crc kubenswrapper[4775]: E0123 14:45:22.713824 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:45:22 crc kubenswrapper[4775]: W0123 14:45:22.744185 4775 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a0e2681_58a7_4050_9dd0_3b0d77bdde6c.slice/crio-22f3073770a1d6de80d758482e7424eaebf2c01e4873d8bad8fa60b2ece1d9e7 WatchSource:0}: Error finding container 22f3073770a1d6de80d758482e7424eaebf2c01e4873d8bad8fa60b2ece1d9e7: Status 404 returned error can't find the container with id 22f3073770a1d6de80d758482e7424eaebf2c01e4873d8bad8fa60b2ece1d9e7 Jan 23 14:45:22 crc kubenswrapper[4775]: I0123 14:45:22.744655 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:23 crc kubenswrapper[4775]: I0123 14:45:23.569087 
4775 generic.go:334] "Generic (PLEG): container finished" podID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerID="b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b" exitCode=0 Jan 23 14:45:23 crc kubenswrapper[4775]: I0123 14:45:23.569146 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerDied","Data":"b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b"} Jan 23 14:45:23 crc kubenswrapper[4775]: I0123 14:45:23.569569 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerStarted","Data":"22f3073770a1d6de80d758482e7424eaebf2c01e4873d8bad8fa60b2ece1d9e7"} Jan 23 14:45:24 crc kubenswrapper[4775]: I0123 14:45:24.580622 4775 generic.go:334] "Generic (PLEG): container finished" podID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerID="66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123" exitCode=0 Jan 23 14:45:24 crc kubenswrapper[4775]: I0123 14:45:24.580681 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerDied","Data":"66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123"} Jan 23 14:45:24 crc kubenswrapper[4775]: I0123 14:45:24.584542 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerStarted","Data":"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e"} Jan 23 14:45:25 crc kubenswrapper[4775]: I0123 14:45:25.594740 4775 generic.go:334] "Generic (PLEG): container finished" podID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerID="5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e" exitCode=0 Jan 23 14:45:25 crc kubenswrapper[4775]: I0123 14:45:25.594792 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerDied","Data":"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e"} Jan 23 14:45:26 crc kubenswrapper[4775]: I0123 14:45:26.605678 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerStarted","Data":"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26"} Jan 23 14:45:26 crc kubenswrapper[4775]: I0123 14:45:26.634448 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lq9jn" podStartSLOduration=2.743281212 podStartE2EDuration="5.634432652s" podCreationTimestamp="2026-01-23 14:45:21 +0000 UTC" firstStartedPulling="2026-01-23 14:45:22.557259431 +0000 UTC m=+2469.552088171" lastFinishedPulling="2026-01-23 14:45:25.448410831 +0000 UTC m=+2472.443239611" observedRunningTime="2026-01-23 14:45:26.632709454 +0000 UTC m=+2473.627538214" watchObservedRunningTime="2026-01-23 14:45:26.634432652 +0000 UTC m=+2473.629261392" Jan 23 14:45:27 crc kubenswrapper[4775]: I0123 14:45:27.618991 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" 
event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerStarted","Data":"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54"} Jan 23 14:45:27 crc kubenswrapper[4775]: I0123 14:45:27.641616 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dx9cv" podStartSLOduration=3.605961506 podStartE2EDuration="6.641589132s" podCreationTimestamp="2026-01-23 14:45:21 +0000 UTC" firstStartedPulling="2026-01-23 14:45:23.570332547 +0000 UTC m=+2470.565161287" lastFinishedPulling="2026-01-23 14:45:26.605960173 +0000 UTC m=+2473.600788913" observedRunningTime="2026-01-23 14:45:27.638249878 +0000 UTC m=+2474.633078648" watchObservedRunningTime="2026-01-23 14:45:27.641589132 +0000 UTC m=+2474.636417902" Jan 23 14:45:28 crc kubenswrapper[4775]: I0123 14:45:28.916541 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:28 crc kubenswrapper[4775]: I0123 14:45:28.916861 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:28 crc kubenswrapper[4775]: I0123 14:45:28.965329 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:29 crc kubenswrapper[4775]: I0123 14:45:29.676175 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:30 crc kubenswrapper[4775]: I0123 14:45:30.989327 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:31 crc kubenswrapper[4775]: I0123 14:45:31.648561 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b72df" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="registry-server" containerID="cri-o://382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466" gracePeriod=2 Jan 23 14:45:31 crc kubenswrapper[4775]: I0123 14:45:31.972958 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:31 crc kubenswrapper[4775]: I0123 14:45:31.983124 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.026445 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.053157 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.141614 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.141664 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.185444 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.255053 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content\") pod \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.255169 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities\") pod \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.255297 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brqv9\" (UniqueName: \"kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9\") pod \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\" (UID: \"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8\") " Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.256045 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities" (OuterVolumeSpecName: "utilities") pod "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" (UID: "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.268123 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9" (OuterVolumeSpecName: "kube-api-access-brqv9") pod "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" (UID: "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8"). InnerVolumeSpecName "kube-api-access-brqv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.281985 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" (UID: "721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.357749 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.357784 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brqv9\" (UniqueName: \"kubernetes.io/projected/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-kube-api-access-brqv9\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.357815 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.664137 4775 generic.go:334] "Generic (PLEG): container finished" podID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerID="382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466" exitCode=0 Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.664509 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerDied","Data":"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466"} Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.664598 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b72df" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.664600 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b72df" event={"ID":"721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8","Type":"ContainerDied","Data":"1876587843ba1a3129234b73c3573e75d8bdb5bd737183e524a0c5243824c914"} Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.664624 4775 scope.go:117] "RemoveContainer" containerID="382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.710516 4775 scope.go:117] "RemoveContainer" containerID="d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.718527 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.725345 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b72df"] Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.738839 4775 scope.go:117] "RemoveContainer" containerID="a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.756733 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.761533 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.785557 4775 scope.go:117] "RemoveContainer" containerID="382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466" Jan 23 14:45:32 crc kubenswrapper[4775]: E0123 14:45:32.786231 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466\": container with ID starting with 382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466 not found: ID does not exist" containerID="382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.786276 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466"} err="failed to get container status \"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466\": rpc error: code = NotFound desc = could not find container \"382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466\": container with ID starting with 382ed8905708faff102d9d1639d93d5308178de011ad542072e24ecf75b61466 not found: ID does not exist" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.786310 4775 scope.go:117] "RemoveContainer" containerID="d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369" Jan 23 14:45:32 crc kubenswrapper[4775]: E0123 14:45:32.790034 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369\": container with ID starting with d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369 not found: ID does not exist" containerID="d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.790101 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369"} err="failed to get container status \"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369\": rpc error: code = NotFound desc = could not find container \"d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369\": container with ID starting with d490002dcf01c76830fe6869be1af01ccffbf1daf8a5c956ecf293f43ed68369 not found: ID does not exist" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.790138 4775 scope.go:117] "RemoveContainer" containerID="a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc" Jan 23 14:45:32 crc kubenswrapper[4775]: E0123 14:45:32.792006 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc\": container with ID starting with a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc not found: ID does not exist" containerID="a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc" Jan 23 14:45:32 crc kubenswrapper[4775]: I0123 14:45:32.792054 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc"} err="failed to get container status \"a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc\": rpc error: code = NotFound desc = could not find container \"a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc\": container with ID starting with a5e16b68c10a9969a5e16a2a094fc129c91f273289373387d352a062263279dc not found: ID does not exist" Jan 23 14:45:33 crc kubenswrapper[4775]: I0123 14:45:33.730041 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" path="/var/lib/kubelet/pods/721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8/volumes" Jan 23 14:45:34 crc kubenswrapper[4775]: I0123 14:45:34.714217 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:45:34 crc kubenswrapper[4775]: E0123 14:45:34.714577 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:45:34 crc kubenswrapper[4775]: I0123 14:45:34.785666 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:34 crc kubenswrapper[4775]: I0123 14:45:34.786027 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dx9cv" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="registry-server" containerID="cri-o://344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54" gracePeriod=2 Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.196422 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.302886 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content\") pod \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.303007 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities\") pod \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.303086 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkfqb\" (UniqueName: \"kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb\") pod \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\" (UID: \"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c\") " Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.303690 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities" (OuterVolumeSpecName: "utilities") pod "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" (UID: "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.308133 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb" (OuterVolumeSpecName: "kube-api-access-lkfqb") pod "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" (UID: "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c"). InnerVolumeSpecName "kube-api-access-lkfqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.360260 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" (UID: "9a0e2681-58a7-4050-9dd0-3b0d77bdde6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.380989 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.404340 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.404369 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkfqb\" (UniqueName: \"kubernetes.io/projected/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-kube-api-access-lkfqb\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.404379 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695049 4775 generic.go:334] "Generic (PLEG): container finished" podID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerID="344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54" exitCode=0 Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695133 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerDied","Data":"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54"} Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695164 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dx9cv" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695217 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dx9cv" event={"ID":"9a0e2681-58a7-4050-9dd0-3b0d77bdde6c","Type":"ContainerDied","Data":"22f3073770a1d6de80d758482e7424eaebf2c01e4873d8bad8fa60b2ece1d9e7"} Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695247 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lq9jn" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="registry-server" containerID="cri-o://b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26" gracePeriod=2 Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.695253 4775 scope.go:117] "RemoveContainer" containerID="344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.740038 4775 scope.go:117] "RemoveContainer" containerID="5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.747616 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.747673 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dx9cv"] Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.831449 4775 scope.go:117] "RemoveContainer" containerID="b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.870542 4775 scope.go:117] "RemoveContainer" containerID="344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54" Jan 23 14:45:35 crc kubenswrapper[4775]: E0123 14:45:35.871265 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54\": container with ID starting with 344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54 not found: ID does not exist" containerID="344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.871326 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54"} err="failed to get container status \"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54\": rpc error: code = NotFound desc = could not find container \"344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54\": container with ID starting with 344024f9787414d7f0930477d4a12f685db8d20254b2af6053620d5037722c54 not found: ID does not exist" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.871346 4775 scope.go:117] "RemoveContainer" containerID="5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e" Jan 23 14:45:35 crc kubenswrapper[4775]: E0123 14:45:35.874615 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e\": container with ID starting with 5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e not found: ID does not exist" containerID="5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e" Jan 23 14:45:35 crc 
kubenswrapper[4775]: I0123 14:45:35.874656 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e"} err="failed to get container status \"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e\": rpc error: code = NotFound desc = could not find container \"5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e\": container with ID starting with 5ecdadc895819a7d51d81998b7771e7fd9d02ea2237f791eeda62b2bfd242c2e not found: ID does not exist" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.874677 4775 scope.go:117] "RemoveContainer" containerID="b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b" Jan 23 14:45:35 crc kubenswrapper[4775]: E0123 14:45:35.874986 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b\": container with ID starting with b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b not found: ID does not exist" containerID="b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b" Jan 23 14:45:35 crc kubenswrapper[4775]: I0123 14:45:35.875072 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b"} err="failed to get container status \"b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b\": rpc error: code = NotFound desc = could not find container \"b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b\": container with ID starting with b1a958b7ba0a68879426427846a7151029fe0ce0287070b144b203617336347b not found: ID does not exist" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.103143 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.215943 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94wnx\" (UniqueName: \"kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx\") pod \"5820a548-636b-4a69-b8d6-b947ee11e3fd\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.216065 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content\") pod \"5820a548-636b-4a69-b8d6-b947ee11e3fd\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.216140 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities\") pod \"5820a548-636b-4a69-b8d6-b947ee11e3fd\" (UID: \"5820a548-636b-4a69-b8d6-b947ee11e3fd\") " Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.217225 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities" (OuterVolumeSpecName: "utilities") pod "5820a548-636b-4a69-b8d6-b947ee11e3fd" (UID: "5820a548-636b-4a69-b8d6-b947ee11e3fd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.222981 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx" (OuterVolumeSpecName: "kube-api-access-94wnx") pod "5820a548-636b-4a69-b8d6-b947ee11e3fd" (UID: "5820a548-636b-4a69-b8d6-b947ee11e3fd"). InnerVolumeSpecName "kube-api-access-94wnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.273071 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5820a548-636b-4a69-b8d6-b947ee11e3fd" (UID: "5820a548-636b-4a69-b8d6-b947ee11e3fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.317618 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.317648 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94wnx\" (UniqueName: \"kubernetes.io/projected/5820a548-636b-4a69-b8d6-b947ee11e3fd-kube-api-access-94wnx\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.317661 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5820a548-636b-4a69-b8d6-b947ee11e3fd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.706650 4775 generic.go:334] "Generic (PLEG): container finished" podID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerID="b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26" exitCode=0 Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.706724 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lq9jn" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.706743 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerDied","Data":"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26"} Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.707161 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lq9jn" event={"ID":"5820a548-636b-4a69-b8d6-b947ee11e3fd","Type":"ContainerDied","Data":"3994761e9ce661e4a34641df7fae6257581617303ab3ad370a636bae72fa58e7"} Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.707186 4775 scope.go:117] "RemoveContainer" containerID="b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.728406 4775 scope.go:117] "RemoveContainer" containerID="66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.749106 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.760335 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lq9jn"] Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.763718 4775 scope.go:117] "RemoveContainer" containerID="4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.789871 4775 scope.go:117] "RemoveContainer" containerID="b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26" Jan 23 14:45:36 crc kubenswrapper[4775]: E0123 14:45:36.790384 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26\": container with ID starting with b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26 not found: ID does not exist" containerID="b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.790442 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26"} err="failed to get container status \"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26\": rpc error: code = NotFound desc = could not find container \"b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26\": container with ID starting with b69437c4162f77acf88b1d79a4f540a54a2a84bc513d0ed39faac3250e860c26 not found: ID does not exist" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.790485 4775 scope.go:117] "RemoveContainer" containerID="66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123" Jan 23 14:45:36 crc kubenswrapper[4775]: E0123 14:45:36.791044 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123\": container with ID starting with 66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123 not found: ID does not exist" containerID="66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.791107 4775 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123"} err="failed to get container status \"66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123\": rpc error: code = NotFound desc = could not find container \"66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123\": container with ID starting with 66381161ea8e3e8b4f98c07e994e28deb923fe56808c28c223ee02a3a51be123 not found: ID does not exist" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.791141 4775 scope.go:117] "RemoveContainer" containerID="4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea" Jan 23 14:45:36 crc kubenswrapper[4775]: E0123 14:45:36.791707 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea\": container with ID starting with 4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea not found: ID does not exist" containerID="4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea" Jan 23 14:45:36 crc kubenswrapper[4775]: I0123 14:45:36.791738 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea"} err="failed to get container status \"4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea\": rpc error: code = NotFound desc = could not find container \"4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea\": container with ID starting with 4520cae67722951660081decece3745c0abd896e4df9ffd0b009c00188cac1ea not found: ID does not exist" Jan 23 14:45:37 crc kubenswrapper[4775]: I0123 14:45:37.729001 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" path="/var/lib/kubelet/pods/5820a548-636b-4a69-b8d6-b947ee11e3fd/volumes" Jan 23 14:45:37 crc kubenswrapper[4775]: I0123 14:45:37.730221 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" path="/var/lib/kubelet/pods/9a0e2681-58a7-4050-9dd0-3b0d77bdde6c/volumes" Jan 23 14:45:45 crc kubenswrapper[4775]: I0123 14:45:45.714733 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:45:45 crc kubenswrapper[4775]: E0123 14:45:45.715730 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:45:49 crc kubenswrapper[4775]: I0123 14:45:49.533533 4775 scope.go:117] "RemoveContainer" containerID="bd180f88acb55bc6174b54cab0740792964b942d82c9bf0cffd2ac1751bececd" Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.074661 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-db-create-bfq79"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.080583 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.096238 4775 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.106832 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-82jzj"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.115169 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-d8kgs"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.120438 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-db-create-bfq79"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.125388 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-574a-account-create-update-mjhg8"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.130421 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-f1e1-account-create-update-8ng7h"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.137331 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell0-db-create-d8kgs"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.147196 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.156871 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-cell1-db-create-82jzj"] Jan 23 14:45:50 crc kubenswrapper[4775]: I0123 14:45:50.163415 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-api-31e4-account-create-update-2rd2s"] Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.725600 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15c2fb30-3be5-4e47-b2d3-8fbd54665494" path="/var/lib/kubelet/pods/15c2fb30-3be5-4e47-b2d3-8fbd54665494/volumes" Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.727365 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48eb2aff-1769-415f-b284-8d0cbf32a4e9" path="/var/lib/kubelet/pods/48eb2aff-1769-415f-b284-8d0cbf32a4e9/volumes" Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.728458 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="603674a6-1055-4e27-b370-2b57865ebc55" path="/var/lib/kubelet/pods/603674a6-1055-4e27-b370-2b57865ebc55/volumes" Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.729609 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="891c1a15-7b44-4c8f-be11-d06333a1d0d1" path="/var/lib/kubelet/pods/891c1a15-7b44-4c8f-be11-d06333a1d0d1/volumes" Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.731598 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95df8848-8035-4302-9689-db060f7d4148" path="/var/lib/kubelet/pods/95df8848-8035-4302-9689-db060f7d4148/volumes" Jan 23 14:45:51 crc kubenswrapper[4775]: I0123 14:45:51.732727 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b564d3-5399-47b6-9397-4c3b006f9e13" path="/var/lib/kubelet/pods/98b564d3-5399-47b6-9397-4c3b006f9e13/volumes" Jan 23 14:45:59 crc kubenswrapper[4775]: I0123 14:45:59.034526 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"] Jan 23 14:45:59 crc kubenswrapper[4775]: I0123 14:45:59.050176 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["nova-kuttl-default/nova-kuttl-cell0-conductor-db-sync-2l6n8"] Jan 23 14:45:59 crc kubenswrapper[4775]: I0123 14:45:59.714309 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:45:59 crc kubenswrapper[4775]: I0123 14:45:59.746496 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f70e17-ec31-43fc-ac56-d1742f962de5" path="/var/lib/kubelet/pods/12f70e17-ec31-43fc-ac56-d1742f962de5/volumes" Jan 23 14:45:59 crc kubenswrapper[4775]: I0123 14:45:59.956593 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f"} Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.037641 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"] Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.042686 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"] Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.048287 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell0-cell-mapping-qxjlc"] Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.053021 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-conductor-db-sync-sjz5r"] Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.721983 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="263d2fcc-c533-4291-8e78-d8e9a2ee2894" path="/var/lib/kubelet/pods/263d2fcc-c533-4291-8e78-d8e9a2ee2894/volumes" Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.722735 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a194a858-8c18-41e1-9a10-428397753ece" path="/var/lib/kubelet/pods/a194a858-8c18-41e1-9a10-428397753ece/volumes" Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.832177 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/util/0.log" Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.975785 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/util/0.log" Jan 23 14:46:17 crc kubenswrapper[4775]: I0123 14:46:17.978341 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.051543 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.191208 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/util/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.201034 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.218508 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_0709d498f83e182ecbe371954b0a809c6be29b89e2a4c9b58ce895f728rw7bt_100f3a0b-4d11-495f-a6fe-57b196820ee3/extract/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.366051 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/util/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.524862 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.540626 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.545989 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/util/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.712856 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/util/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.725623 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/extract/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.762478 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cc9dfa20d29dd4b0e9e23f5076cc42371c4a98769c3e308fa76fa6054gs2pc_a7025f67-434a-4dba-9b3a-e3b809f5c614/pull/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.902384 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-pk9jd_56ee00d0-c0f0-442a-bf4a-7335b62c1c4e/manager/0.log" Jan 23 14:46:18 crc kubenswrapper[4775]: I0123 14:46:18.928583 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-69cf5d4557-dz7ft_9ce79c2a-2c52-48de-80a6-887d592578d3/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.093908 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-ppxmc_352223d5-fa0a-43df-8bad-0eaa9b6b439d/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.103127 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-jq89z_64bae0eb-d703-4058-a545-b42d62045b90/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.271651 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-xrmvt_841fb528-61a8-445e-a135-be26295bc975/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 
14:46:19.299469 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sg9x5_d9e69fcf-58c9-45fe-a291-4628c8219e10/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.464370 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-f7lm6_d98bebb2-a42a-45a6-b452-a82ce1f62896/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.503204 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-58749ffdfb-mcrj4_5a65a9ef-28c7-46ae-826d-5546af1103a5/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.693024 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-bgbpj_0784c928-e0c5-4afb-99cb-4f1f96820a14/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.723415 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-pfdc5_853c6152-25bf-4374-a941-f9cd4202c87f/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.899475 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-jk8vg_bb6ce8ae-8d3f-4988-9386-6a20487f8ae9/manager/0.log" Jan 23 14:46:19 crc kubenswrapper[4775]: I0123 14:46:19.910297 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-sxkzh_9710b785-e422-4aca-88e8-e88d26d4e724/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.075614 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-index-x4gqk_78f375c8-5d62-4cbb-b348-8205d476d603/registry-server/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.311696 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7bd9774b6-vl7m5_a07598ff-60cc-482e-a551-af751575709c/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.353640 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7c5fcc4cc6-wwr78_92377252-2e4d-48bb-95ea-724a4ff5c788/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.441744 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854zk48c_44a963d8-d403-42d5-acd2-a0379f07db51/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.625605 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-5czdz_a0ddc210-ca29-42e4-a4c2-a07881434fed/registry-server/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.735203 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-bb8f85db-bkqk9_313b5382-60cf-4627-8ba7-a091fc457989/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.764211 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-xst4r_3d7c7bc6-5124-4cd4-a406-448ca94ba640/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.918209 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5d646b7d76-n4k5s_072b9a9d-8a08-454c-b1b6-628fcdcc91df/manager/0.log" Jan 23 14:46:20 crc kubenswrapper[4775]: I0123 14:46:20.975480 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2lhsf_f9da51f1-a035-44b8-9391-0d6018a84c61/operator/0.log" Jan 23 14:46:21 crc kubenswrapper[4775]: I0123 14:46:21.087164 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-nqw74_ecef6080-ea2c-43f4-8ffa-da2ceb59369d/manager/0.log" Jan 23 14:46:21 crc kubenswrapper[4775]: I0123 14:46:21.125393 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-jrhlh_91da96b4-921a-4b88-9804-55745989e08b/manager/0.log" Jan 23 14:46:21 crc kubenswrapper[4775]: I0123 14:46:21.172396 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-xtmz8_9f9597bf-12a1-4204-ac57-37c4c0189687/manager/0.log" Jan 23 14:46:21 crc kubenswrapper[4775]: I0123 14:46:21.321352 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6d9458688d-v8dw9_272dcd84-1bb6-42cb-8c8e-6851f9f031de/manager/0.log" Jan 23 14:46:31 crc kubenswrapper[4775]: I0123 14:46:31.038168 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"] Jan 23 14:46:31 crc kubenswrapper[4775]: I0123 14:46:31.046051 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-mapping-4gfb8"] Jan 23 14:46:31 crc kubenswrapper[4775]: I0123 14:46:31.729358 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef19dc5-1d78-479c-8220-340c46c44bdf" path="/var/lib/kubelet/pods/3ef19dc5-1d78-479c-8220-340c46c44bdf/volumes" Jan 23 14:46:41 crc kubenswrapper[4775]: I0123 14:46:41.712573 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-psxgx_13e16abe-9325-4638-8b20-7195b7af8e68/control-plane-machine-set-operator/0.log" Jan 23 14:46:41 crc kubenswrapper[4775]: I0123 14:46:41.884062 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-svb79_85a9044b-9089-4a6a-87e6-06372c531aa9/kube-rbac-proxy/0.log" Jan 23 14:46:41 crc kubenswrapper[4775]: I0123 14:46:41.918074 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-svb79_85a9044b-9089-4a6a-87e6-06372c531aa9/machine-api-operator/0.log" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.680944 4775 scope.go:117] "RemoveContainer" containerID="3b2dfb102f46ee1631a2160c9d3d2f454d0244cb082c8318b072e1947bb67ce1" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.707939 4775 scope.go:117] "RemoveContainer" containerID="0eff9d8eee28ce912e21c7c4f7871ae916bc9d5ed3ea4fca779e82c2788bb4b7" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.757483 4775 scope.go:117] "RemoveContainer" containerID="fad204a9922c6b587aa30b8277005173345d455f94c99d5d275be428107c4c7c" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.797086 4775 scope.go:117] "RemoveContainer" containerID="8d06597f807e3e42864d38d837f7984e31d4d87d055c7ea7bb57e3bf624b9c80" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 
14:46:49.844969 4775 scope.go:117] "RemoveContainer" containerID="f75e094c5540e8cb925dd39cbb448ad5adf94fb3b2f88a9a2855acad38942424" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.863612 4775 scope.go:117] "RemoveContainer" containerID="5022709a82d85e5efe22de467daeee972c2edbb45f0956772656b5f2da7c871d" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.891185 4775 scope.go:117] "RemoveContainer" containerID="7866fa95041ef01597a04bb378890e5ad494e3f63a1535140905408dc45663a9" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.937965 4775 scope.go:117] "RemoveContainer" containerID="10368cb00c51c9c09d42987a704f6c282da205a1023667df771174ceb21b2b54" Jan 23 14:46:49 crc kubenswrapper[4775]: I0123 14:46:49.984214 4775 scope.go:117] "RemoveContainer" containerID="9181f36c62e9c5f12ea45cd0ada22e77d0a8f8e6dddcf6191c606aedb0bccd71" Jan 23 14:46:50 crc kubenswrapper[4775]: I0123 14:46:50.010690 4775 scope.go:117] "RemoveContainer" containerID="60accca565e62d33f56b52cced99fb327dbdd19ac23aa7c351971c0a1d7d06f7" Jan 23 14:46:56 crc kubenswrapper[4775]: I0123 14:46:56.934425 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-dzfhf_2a26d984-5abe-44ce-ad1e-25842b8f7e51/cert-manager-controller/0.log" Jan 23 14:46:57 crc kubenswrapper[4775]: I0123 14:46:57.059284 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-qsmln_620134d3-d230-4c5b-8aaf-4213bcba307c/cert-manager-cainjector/0.log" Jan 23 14:46:57 crc kubenswrapper[4775]: I0123 14:46:57.100358 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-w6lsn_3613a1b4-54b6-4a47-988a-a6624d530636/cert-manager-webhook/0.log" Jan 23 14:47:11 crc kubenswrapper[4775]: I0123 14:47:11.707223 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-w5xfs_e932364d-5f85-43fd-ba05-f4e0934482c2/nmstate-console-plugin/0.log" Jan 23 14:47:11 crc kubenswrapper[4775]: I0123 14:47:11.928767 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-wmglj_18100557-00ef-4de8-9a7f-df953190a9c6/nmstate-handler/0.log" Jan 23 14:47:12 crc kubenswrapper[4775]: I0123 14:47:12.035640 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-p7nxk_97726a36-cf4b-4688-b028-448734bd8c23/kube-rbac-proxy/0.log" Jan 23 14:47:12 crc kubenswrapper[4775]: I0123 14:47:12.046305 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-p7nxk_97726a36-cf4b-4688-b028-448734bd8c23/nmstate-metrics/0.log" Jan 23 14:47:12 crc kubenswrapper[4775]: I0123 14:47:12.119338 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-gq778_ebe0482d-2988-4f4d-929f-4c2980e19cf3/nmstate-operator/0.log" Jan 23 14:47:12 crc kubenswrapper[4775]: I0123 14:47:12.221663 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-rnbff_6932e29c-8eac-4e0f-9516-c2e922655cbc/nmstate-webhook/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.072285 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-7qz58_7755c0c4-4e11-47c6-955d-453408fd4316/kube-rbac-proxy/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.167339 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-6968d8fdc4-7qz58_7755c0c4-4e11-47c6-955d-453408fd4316/controller/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.272402 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-frr-files/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.404290 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-frr-files/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.416000 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-reloader/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.448883 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-reloader/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.455040 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-metrics/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.597392 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-metrics/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.622032 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-frr-files/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.631248 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-metrics/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.644080 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-reloader/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.830691 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-frr-files/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.850987 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-reloader/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.851879 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/cp-metrics/0.log" Jan 23 14:47:44 crc kubenswrapper[4775]: I0123 14:47:44.861119 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/controller/0.log" Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.063148 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/kube-rbac-proxy-frr/0.log" Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.063928 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/kube-rbac-proxy/0.log" Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.093599 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/frr-metrics/0.log" Jan 23 14:47:45 crc 
kubenswrapper[4775]: I0123 14:47:45.238558 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/reloader/0.log" Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.341969 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-p49hv_9eb8e4c8-06ce-427a-9b91-7b77d4e8a783/frr-k8s-webhook-server/0.log" Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.465512 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-558d9b5f8-fgs57_838b952f-6d05-4955-82fd-9cf8a017c5b5/manager/0.log" Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.657860 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-699f5544f9-66nkz_fa6cceac-c1d4-4e7c-9e60-4dd698abc182/webhook-server/0.log" Jan 23 14:47:45 crc kubenswrapper[4775]: I0123 14:47:45.808915 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-x4gxj_9334cd3c-2410-4fbd-8cc1-14edca3afb92/kube-rbac-proxy/0.log" Jan 23 14:47:46 crc kubenswrapper[4775]: I0123 14:47:46.066325 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-x4gxj_9334cd3c-2410-4fbd-8cc1-14edca3afb92/speaker/0.log" Jan 23 14:47:46 crc kubenswrapper[4775]: I0123 14:47:46.166541 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pv6fp_6831fcdc-628b-4bef-bf9c-5e24b63f9196/frr/0.log" Jan 23 14:48:04 crc kubenswrapper[4775]: I0123 14:48:04.153599 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_keystone-7d978f-gdlmv_898c8554-82c6-4777-8869-15981e356a84/keystone-api/0.log" Jan 23 14:48:04 crc kubenswrapper[4775]: I0123 14:48:04.354557 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_56066bf2-4408-46e5-8df0-6ce62447bf2a/nova-kuttl-api-api/0.log" Jan 23 14:48:04 crc kubenswrapper[4775]: I0123 14:48:04.604876 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-api-0_56066bf2-4408-46e5-8df0-6ce62447bf2a/nova-kuttl-api-log/0.log" Jan 23 14:48:04 crc kubenswrapper[4775]: I0123 14:48:04.662404 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell0-conductor-0_fab3b1c6-093c-4891-957c-fad86eb8fd31/nova-kuttl-cell0-conductor-conductor/0.log" Jan 23 14:48:04 crc kubenswrapper[4775]: I0123 14:48:04.874688 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-conductor-0_1fd448a3-6897-490f-9c92-98590cee53ca/nova-kuttl-cell1-conductor-conductor/0.log" Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.096063 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-cell1-novncproxy-0_cb15b357-f464-4e43-a038-3b9e72455d49/nova-kuttl-cell1-novncproxy-novncproxy/0.log" Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.145506 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_72d0a843-11de-43a6-9c92-6a65a6d406ec/nova-kuttl-metadata-log/0.log" Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.248862 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-metadata-0_72d0a843-11de-43a6-9c92-6a65a6d406ec/nova-kuttl-metadata-metadata/0.log" Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.404938 4775 
log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_nova-kuttl-scheduler-0_bdfa6b38-3f0a-4f8e-9bd4-ec3907a919f0/nova-kuttl-scheduler-scheduler/0.log" Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.517604 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_481cbe1b-2796-4ad2-a342-3661afa62383/mysql-bootstrap/0.log" Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.734440 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_481cbe1b-2796-4ad2-a342-3661afa62383/mysql-bootstrap/0.log" Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.765127 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-cell1-galera-0_481cbe1b-2796-4ad2-a342-3661afa62383/galera/0.log" Jan 23 14:48:05 crc kubenswrapper[4775]: I0123 14:48:05.942782 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_372c512d-5894-49da-ae1e-cb3e54aadacc/mysql-bootstrap/0.log" Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.209450 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_372c512d-5894-49da-ae1e-cb3e54aadacc/galera/0.log" Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.252025 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstack-galera-0_372c512d-5894-49da-ae1e-cb3e54aadacc/mysql-bootstrap/0.log" Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.424031 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_openstackclient_76733f2d-491c-45dd-bcf5-1a4423019717/openstackclient/0.log" Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.465601 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_memcached-0_2e1f7aa1-1780-4ccb-b1a5-66b9b279d555/memcached/0.log" Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.476989 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7787b67bb8-psq7t_6b653824-2e32-431a-8b16-f8687610c0fe/placement-api/0.log" Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.613706 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_placement-7787b67bb8-psq7t_6b653824-2e32-431a-8b16-f8687610c0fe/placement-log/0.log" Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.653918 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_401a94b6-0628-4cea-b62a-c3229a913d16/setup-container/0.log" Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.809680 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_401a94b6-0628-4cea-b62a-c3229a913d16/setup-container/0.log" Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.866939 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-broadcaster-server-0_401a94b6-0628-4cea-b62a-c3229a913d16/rabbitmq/0.log" Jan 23 14:48:06 crc kubenswrapper[4775]: I0123 14:48:06.873071 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_4b05c189-a694-4cbc-b679-a974e6bf99bc/setup-container/0.log" Jan 23 14:48:07 crc kubenswrapper[4775]: I0123 14:48:07.009398 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_4b05c189-a694-4cbc-b679-a974e6bf99bc/setup-container/0.log" Jan 23 14:48:07 crc kubenswrapper[4775]: I0123 14:48:07.059773 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_70288c27-7f95-4843-a8fb-f2ac58ea8e1f/setup-container/0.log" Jan 23 14:48:07 crc kubenswrapper[4775]: I0123 14:48:07.062634 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-cell1-server-0_4b05c189-a694-4cbc-b679-a974e6bf99bc/rabbitmq/0.log" Jan 23 14:48:07 crc kubenswrapper[4775]: I0123 14:48:07.235019 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_70288c27-7f95-4843-a8fb-f2ac58ea8e1f/setup-container/0.log" Jan 23 14:48:07 crc kubenswrapper[4775]: I0123 14:48:07.259457 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/nova-kuttl-default_rabbitmq-server-0_70288c27-7f95-4843-a8fb-f2ac58ea8e1f/rabbitmq/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.016588 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/util/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.218475 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.218858 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.346606 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/util/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.383830 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/pull/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.402346 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/pull/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.581237 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/extract/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.584944 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/util/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.633724 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a7544j_44d1d9d6-a01e-49cc-8066-15c9954fda32/pull/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.767032 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/util/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.938652 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/util/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.989458 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/pull/0.log" Jan 23 14:48:23 crc kubenswrapper[4775]: I0123 14:48:23.993446 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/pull/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.138067 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/pull/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.145844 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/util/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.181204 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc9k76f_6f15de03-78a8-4158-8a06-0174d617e32b/extract/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.303511 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/util/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.467381 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/pull/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.474966 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/util/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.485217 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/pull/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.801819 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/pull/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.923385 4775 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/extract/0.log" Jan 23 14:48:24 crc kubenswrapper[4775]: I0123 14:48:24.947407 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713h9cll_d4d873a3-d698-439c-a1de-c9a7fc9e1e6d/util/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.022353 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-utilities/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.216795 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-utilities/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.234027 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-content/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.267603 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-content/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.408624 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-utilities/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.482955 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/extract-content/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.607728 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-utilities/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.784729 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bb2pb_d9f7bf95-e60c-4dbb-bb9b-0a7c038871f5/registry-server/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.837202 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-content/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.879491 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-utilities/0.log" Jan 23 14:48:25 crc kubenswrapper[4775]: I0123 14:48:25.940965 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-content/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.043940 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-utilities/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.044425 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/extract-content/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.211403 4775 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-24s7d_ffa6638c-aaa0-418b-ad22-e5532ae16f68/marketplace-operator/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.392766 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-8jjcj_ed5c162e-62a9-4760-b5e0-a249a70225a0/registry-server/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.414699 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-utilities/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.611000 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-content/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.626142 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-utilities/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.636055 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-content/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.788149 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-utilities/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.791661 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/extract-content/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.889760 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-utilities/0.log" Jan 23 14:48:26 crc kubenswrapper[4775]: I0123 14:48:26.938032 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fxcrw_39bc9387-f295-4aec-ad66-8831265c0400/registry-server/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.026408 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-utilities/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.069118 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-content/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.096104 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-content/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.223886 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-content/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.226788 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/extract-utilities/0.log" Jan 23 14:48:27 crc kubenswrapper[4775]: I0123 14:48:27.577329 4775 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-sx4qm_0c94dee4-8e79-4f60-a8b9-2c1f33490ba7/registry-server/0.log" Jan 23 14:48:53 crc kubenswrapper[4775]: I0123 14:48:53.219010 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:48:53 crc kubenswrapper[4775]: I0123 14:48:53.221120 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.219521 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.220289 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.220362 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.221326 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.221645 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f" gracePeriod=600 Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.722533 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f" exitCode=0 Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.728170 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f"} Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.728241 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" 
event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerStarted","Data":"b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747"} Jan 23 14:49:23 crc kubenswrapper[4775]: I0123 14:49:23.728273 4775 scope.go:117] "RemoveContainer" containerID="607e4b420dc55958565e5ac75d3d168f04cf07a9f1d07d88493e707d7e21483d" Jan 23 14:49:44 crc kubenswrapper[4775]: I0123 14:49:44.958206 4775 generic.go:334] "Generic (PLEG): container finished" podID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerID="3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51" exitCode=0 Jan 23 14:49:44 crc kubenswrapper[4775]: I0123 14:49:44.958420 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" event={"ID":"41dd897c-4a67-4a0a-a7a3-c17b6d05653d","Type":"ContainerDied","Data":"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51"} Jan 23 14:49:44 crc kubenswrapper[4775]: I0123 14:49:44.959841 4775 scope.go:117] "RemoveContainer" containerID="3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51" Jan 23 14:49:45 crc kubenswrapper[4775]: I0123 14:49:45.712464 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6vw8s_must-gather-9lvjt_41dd897c-4a67-4a0a-a7a3-c17b6d05653d/gather/0.log" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.230590 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-6vw8s/must-gather-9lvjt"] Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.231648 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="copy" containerID="cri-o://9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47" gracePeriod=2 Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.243143 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-6vw8s/must-gather-9lvjt"] Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.636553 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6vw8s_must-gather-9lvjt_41dd897c-4a67-4a0a-a7a3-c17b6d05653d/copy/0.log" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.637938 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.722272 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86lv4\" (UniqueName: \"kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4\") pod \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.722629 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output\") pod \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\" (UID: \"41dd897c-4a67-4a0a-a7a3-c17b6d05653d\") " Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.731541 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4" (OuterVolumeSpecName: "kube-api-access-86lv4") pod "41dd897c-4a67-4a0a-a7a3-c17b6d05653d" (UID: "41dd897c-4a67-4a0a-a7a3-c17b6d05653d"). 
InnerVolumeSpecName "kube-api-access-86lv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.831158 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86lv4\" (UniqueName: \"kubernetes.io/projected/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-kube-api-access-86lv4\") on node \"crc\" DevicePath \"\"" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.907859 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "41dd897c-4a67-4a0a-a7a3-c17b6d05653d" (UID: "41dd897c-4a67-4a0a-a7a3-c17b6d05653d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:49:53 crc kubenswrapper[4775]: I0123 14:49:53.932940 4775 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/41dd897c-4a67-4a0a-a7a3-c17b6d05653d-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.056693 4775 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-6vw8s_must-gather-9lvjt_41dd897c-4a67-4a0a-a7a3-c17b6d05653d/copy/0.log" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.057225 4775 generic.go:334] "Generic (PLEG): container finished" podID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerID="9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47" exitCode=143 Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.057318 4775 scope.go:117] "RemoveContainer" containerID="9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.057352 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-6vw8s/must-gather-9lvjt" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.088381 4775 scope.go:117] "RemoveContainer" containerID="3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.170447 4775 scope.go:117] "RemoveContainer" containerID="9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47" Jan 23 14:49:54 crc kubenswrapper[4775]: E0123 14:49:54.171478 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47\": container with ID starting with 9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47 not found: ID does not exist" containerID="9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.171510 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47"} err="failed to get container status \"9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47\": rpc error: code = NotFound desc = could not find container \"9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47\": container with ID starting with 9795a40e8b362f20a5bafb6221130232aed660a8237ad820b9b5c489d963be47 not found: ID does not exist" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.171530 4775 scope.go:117] "RemoveContainer" containerID="3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51" Jan 23 14:49:54 crc kubenswrapper[4775]: E0123 14:49:54.172032 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51\": container with ID starting with 3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51 not found: ID does not exist" containerID="3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51" Jan 23 14:49:54 crc kubenswrapper[4775]: I0123 14:49:54.172062 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51"} err="failed to get container status \"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51\": rpc error: code = NotFound desc = could not find container \"3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51\": container with ID starting with 3998f4e1023e2b01b3b3037ee3f54b7b541f7dd5b790471a05de169061550d51 not found: ID does not exist" Jan 23 14:49:55 crc kubenswrapper[4775]: I0123 14:49:55.731562 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" path="/var/lib/kubelet/pods/41dd897c-4a67-4a0a-a7a3-c17b6d05653d/volumes" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.894160 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"] Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895046 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="copy" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895062 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="copy" Jan 23 14:50:02 crc 
kubenswrapper[4775]: E0123 14:50:02.895076 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="extract-utilities" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895083 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="extract-utilities" Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895103 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="extract-utilities" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895113 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="extract-utilities" Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895133 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="extract-content" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895140 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="extract-content" Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895155 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="registry-server" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895162 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="registry-server" Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895170 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="extract-utilities" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895178 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="extract-utilities" Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895186 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="extract-content" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895192 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="extract-content" Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895202 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="registry-server" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895209 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="registry-server" Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895221 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="extract-content" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895229 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="extract-content" Jan 23 14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895236 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="gather" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895242 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="gather" Jan 23 
14:50:02 crc kubenswrapper[4775]: E0123 14:50:02.895255 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="registry-server" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895262 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="registry-server" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895421 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a0e2681-58a7-4050-9dd0-3b0d77bdde6c" containerName="registry-server" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895437 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="5820a548-636b-4a69-b8d6-b947ee11e3fd" containerName="registry-server" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895458 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="721aa0ee-a7d9-4b8c-abb6-d0d6bcf2d4e8" containerName="registry-server" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895468 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="copy" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.895481 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="41dd897c-4a67-4a0a-a7a3-c17b6d05653d" containerName="gather" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.896811 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.931740 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"] Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.995218 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.995498 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:02 crc kubenswrapper[4775]: I0123 14:50:02.995678 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdl82\" (UniqueName: \"kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.096679 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.097399 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.097351 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.097677 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.097971 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdl82\" (UniqueName: \"kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.115417 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdl82\" (UniqueName: \"kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82\") pod \"redhat-operators-bl77g\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.218632 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:03 crc kubenswrapper[4775]: I0123 14:50:03.659691 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"] Jan 23 14:50:04 crc kubenswrapper[4775]: I0123 14:50:04.175719 4775 generic.go:334] "Generic (PLEG): container finished" podID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerID="86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176" exitCode=0 Jan 23 14:50:04 crc kubenswrapper[4775]: I0123 14:50:04.175872 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerDied","Data":"86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176"} Jan 23 14:50:04 crc kubenswrapper[4775]: I0123 14:50:04.176214 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerStarted","Data":"2e47c4962c77a254f758bcf21d44c4606a2440152efa666525d3e342f58f6a2c"} Jan 23 14:50:04 crc kubenswrapper[4775]: I0123 14:50:04.178601 4775 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 14:50:05 crc kubenswrapper[4775]: I0123 14:50:05.186795 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerStarted","Data":"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146"} Jan 23 14:50:06 crc kubenswrapper[4775]: I0123 14:50:06.204931 4775 generic.go:334] "Generic (PLEG): container finished" podID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerID="477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146" exitCode=0 Jan 23 14:50:06 crc kubenswrapper[4775]: I0123 14:50:06.205023 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerDied","Data":"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146"} Jan 23 14:50:07 crc kubenswrapper[4775]: I0123 14:50:07.215682 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerStarted","Data":"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302"} Jan 23 14:50:07 crc kubenswrapper[4775]: I0123 14:50:07.241357 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bl77g" podStartSLOduration=2.715737217 podStartE2EDuration="5.241339173s" podCreationTimestamp="2026-01-23 14:50:02 +0000 UTC" firstStartedPulling="2026-01-23 14:50:04.177835081 +0000 UTC m=+2751.172663851" lastFinishedPulling="2026-01-23 14:50:06.703437067 +0000 UTC m=+2753.698265807" observedRunningTime="2026-01-23 14:50:07.238523414 +0000 UTC m=+2754.233352164" watchObservedRunningTime="2026-01-23 14:50:07.241339173 +0000 UTC m=+2754.236167923" Jan 23 14:50:13 crc kubenswrapper[4775]: I0123 14:50:13.219125 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:13 crc kubenswrapper[4775]: I0123 14:50:13.219837 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:14 crc 
kubenswrapper[4775]: I0123 14:50:14.294142 4775 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bl77g" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="registry-server" probeResult="failure" output=< Jan 23 14:50:14 crc kubenswrapper[4775]: timeout: failed to connect service ":50051" within 1s Jan 23 14:50:14 crc kubenswrapper[4775]: > Jan 23 14:50:23 crc kubenswrapper[4775]: I0123 14:50:23.295485 4775 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:23 crc kubenswrapper[4775]: I0123 14:50:23.366928 4775 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:24 crc kubenswrapper[4775]: I0123 14:50:24.082413 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"] Jan 23 14:50:24 crc kubenswrapper[4775]: I0123 14:50:24.407121 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bl77g" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="registry-server" containerID="cri-o://880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302" gracePeriod=2 Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.004886 4775 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.123981 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content\") pod \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.124098 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdl82\" (UniqueName: \"kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82\") pod \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.124347 4775 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities\") pod \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\" (UID: \"a2c3db3a-a4f0-42e0-95dd-0098e860d77a\") " Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.126018 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities" (OuterVolumeSpecName: "utilities") pod "a2c3db3a-a4f0-42e0-95dd-0098e860d77a" (UID: "a2c3db3a-a4f0-42e0-95dd-0098e860d77a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.131667 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82" (OuterVolumeSpecName: "kube-api-access-fdl82") pod "a2c3db3a-a4f0-42e0-95dd-0098e860d77a" (UID: "a2c3db3a-a4f0-42e0-95dd-0098e860d77a"). InnerVolumeSpecName "kube-api-access-fdl82". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.226703 4775 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdl82\" (UniqueName: \"kubernetes.io/projected/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-kube-api-access-fdl82\") on node \"crc\" DevicePath \"\"" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.226742 4775 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.255062 4775 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a2c3db3a-a4f0-42e0-95dd-0098e860d77a" (UID: "a2c3db3a-a4f0-42e0-95dd-0098e860d77a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.329006 4775 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a2c3db3a-a4f0-42e0-95dd-0098e860d77a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.424202 4775 generic.go:334] "Generic (PLEG): container finished" podID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerID="880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302" exitCode=0 Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.424262 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerDied","Data":"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302"} Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.424303 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bl77g" event={"ID":"a2c3db3a-a4f0-42e0-95dd-0098e860d77a","Type":"ContainerDied","Data":"2e47c4962c77a254f758bcf21d44c4606a2440152efa666525d3e342f58f6a2c"} Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.424335 4775 scope.go:117] "RemoveContainer" containerID="880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.424524 4775 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bl77g" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.478683 4775 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"] Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.481399 4775 scope.go:117] "RemoveContainer" containerID="477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.489440 4775 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bl77g"] Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.518915 4775 scope.go:117] "RemoveContainer" containerID="86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.565149 4775 scope.go:117] "RemoveContainer" containerID="880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302" Jan 23 14:50:25 crc kubenswrapper[4775]: E0123 14:50:25.565879 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302\": container with ID starting with 880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302 not found: ID does not exist" containerID="880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.565920 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302"} err="failed to get container status \"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302\": rpc error: code = NotFound desc = could not find container \"880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302\": container with ID starting with 880664f75a1c2365c45c7d84855873557a42cb4d3a2067a3e14c4b3387bc5302 not found: ID does not exist" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.565951 4775 scope.go:117] "RemoveContainer" containerID="477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146" Jan 23 14:50:25 crc kubenswrapper[4775]: E0123 14:50:25.566250 4775 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146\": container with ID starting with 477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146 not found: ID does not exist" containerID="477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.566280 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146"} err="failed to get container status \"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146\": rpc error: code = NotFound desc = could not find container \"477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146\": container with ID starting with 477e40967f5791e68c5432610a5e9b577d2d3567aff6149f47fcb3e320c71146 not found: ID does not exist" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.566298 4775 scope.go:117] "RemoveContainer" containerID="86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176" Jan 23 14:50:25 crc kubenswrapper[4775]: E0123 14:50:25.566740 4775 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176\": container with ID starting with 86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176 not found: ID does not exist" containerID="86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.566771 4775 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176"} err="failed to get container status \"86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176\": rpc error: code = NotFound desc = could not find container \"86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176\": container with ID starting with 86e48ca1568a00ced41e2a25c3c56d895ec4ddb7579c973ca0d7b9bf9c7cb176 not found: ID does not exist" Jan 23 14:50:25 crc kubenswrapper[4775]: I0123 14:50:25.726633 4775 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" path="/var/lib/kubelet/pods/a2c3db3a-a4f0-42e0-95dd-0098e860d77a/volumes" Jan 23 14:51:23 crc kubenswrapper[4775]: I0123 14:51:23.219177 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:51:23 crc kubenswrapper[4775]: I0123 14:51:23.219839 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:51:53 crc kubenswrapper[4775]: I0123 14:51:53.219469 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:51:53 crc kubenswrapper[4775]: I0123 14:51:53.220644 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:52:23 crc kubenswrapper[4775]: I0123 14:52:23.218841 4775 patch_prober.go:28] interesting pod/machine-config-daemon-4q9qg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 14:52:23 crc kubenswrapper[4775]: I0123 14:52:23.222037 4775 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 14:52:23 crc kubenswrapper[4775]: I0123 14:52:23.222330 4775 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" Jan 23 14:52:23 crc kubenswrapper[4775]: I0123 14:52:23.224280 4775 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747"} pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 14:52:23 crc kubenswrapper[4775]: I0123 14:52:23.224625 4775 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" containerName="machine-config-daemon" containerID="cri-o://b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" gracePeriod=600 Jan 23 14:52:23 crc kubenswrapper[4775]: E0123 14:52:23.359516 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:52:24 crc kubenswrapper[4775]: I0123 14:52:24.229997 4775 generic.go:334] "Generic (PLEG): container finished" podID="4fea0767-0566-4214-855d-ed0373946271" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" exitCode=0 Jan 23 14:52:24 crc kubenswrapper[4775]: I0123 14:52:24.230955 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" event={"ID":"4fea0767-0566-4214-855d-ed0373946271","Type":"ContainerDied","Data":"b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747"} Jan 23 14:52:24 crc kubenswrapper[4775]: I0123 14:52:24.231082 4775 scope.go:117] "RemoveContainer" containerID="fb9925329613a52dcbc6411915216316f974c31f7e89dd07fdacbd9dd078559f" Jan 23 14:52:24 crc kubenswrapper[4775]: I0123 14:52:24.232018 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:52:24 crc kubenswrapper[4775]: E0123 14:52:24.232494 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:52:35 crc kubenswrapper[4775]: I0123 14:52:35.713853 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:52:35 crc kubenswrapper[4775]: E0123 14:52:35.715135 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:52:48 crc 
kubenswrapper[4775]: I0123 14:52:48.714688 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:52:48 crc kubenswrapper[4775]: E0123 14:52:48.715788 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.196916 4775 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"] Jan 23 14:53:01 crc kubenswrapper[4775]: E0123 14:53:01.197636 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="registry-server" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.197649 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="registry-server" Jan 23 14:53:01 crc kubenswrapper[4775]: E0123 14:53:01.197667 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="extract-content" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.197672 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="extract-content" Jan 23 14:53:01 crc kubenswrapper[4775]: E0123 14:53:01.197685 4775 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="extract-utilities" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.197691 4775 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="extract-utilities" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.197843 4775 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c3db3a-a4f0-42e0-95dd-0098e860d77a" containerName="registry-server" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.198371 4775 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.200499 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-config-data" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.200914 4775 reflector.go:368] Caches populated for *v1.Secret from object-"nova-kuttl-default"/"nova-kuttl-cell0-manage-scripts" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.246781 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"] Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.248398 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-scripts\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.248497 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-config-data\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.248524 4775 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2vvm\" (UniqueName: \"kubernetes.io/projected/f75714c3-400a-4e4a-b1b4-220a7b426db4-kube-api-access-d2vvm\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.349679 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-scripts\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.349780 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-config-data\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.349822 4775 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2vvm\" (UniqueName: \"kubernetes.io/projected/f75714c3-400a-4e4a-b1b4-220a7b426db4-kube-api-access-d2vvm\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.356304 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-config-data\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:01 crc 
kubenswrapper[4775]: I0123 14:53:01.356533 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f75714c3-400a-4e4a-b1b4-220a7b426db4-scripts\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.367339 4775 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2vvm\" (UniqueName: \"kubernetes.io/projected/f75714c3-400a-4e4a-b1b4-220a7b426db4-kube-api-access-d2vvm\") pod \"nova-kuttl-cell1-cell-delete-6p9nb\" (UID: \"f75714c3-400a-4e4a-b1b4-220a7b426db4\") " pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:01 crc kubenswrapper[4775]: I0123 14:53:01.525341 4775 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" Jan 23 14:53:02 crc kubenswrapper[4775]: I0123 14:53:02.033144 4775 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb"] Jan 23 14:53:02 crc kubenswrapper[4775]: I0123 14:53:02.631898 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"e95d4dff0e0513663060b6ceace58f07777bfbaef6dda1f8bb96d1849109c1ba"} Jan 23 14:53:02 crc kubenswrapper[4775]: I0123 14:53:02.632280 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"f0146f8914dc95860df20fbac462375f9cb984677043cf48b9621229f25f5445"} Jan 23 14:53:02 crc kubenswrapper[4775]: I0123 14:53:02.714506 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:53:02 crc kubenswrapper[4775]: E0123 14:53:02.714853 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:53:06 crc kubenswrapper[4775]: I0123 14:53:06.664897 4775 generic.go:334] "Generic (PLEG): container finished" podID="f75714c3-400a-4e4a-b1b4-220a7b426db4" containerID="e95d4dff0e0513663060b6ceace58f07777bfbaef6dda1f8bb96d1849109c1ba" exitCode=2 Jan 23 14:53:06 crc kubenswrapper[4775]: I0123 14:53:06.665009 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerDied","Data":"e95d4dff0e0513663060b6ceace58f07777bfbaef6dda1f8bb96d1849109c1ba"} Jan 23 14:53:06 crc kubenswrapper[4775]: I0123 14:53:06.666028 4775 scope.go:117] "RemoveContainer" containerID="e95d4dff0e0513663060b6ceace58f07777bfbaef6dda1f8bb96d1849109c1ba" Jan 23 14:53:07 crc kubenswrapper[4775]: I0123 14:53:07.679068 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" 
event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e"} Jan 23 14:53:07 crc kubenswrapper[4775]: I0123 14:53:07.710546 4775 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podStartSLOduration=6.7105310320000005 podStartE2EDuration="6.710531032s" podCreationTimestamp="2026-01-23 14:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 14:53:02.647668998 +0000 UTC m=+2929.642497738" watchObservedRunningTime="2026-01-23 14:53:07.710531032 +0000 UTC m=+2934.705359772" Jan 23 14:53:11 crc kubenswrapper[4775]: I0123 14:53:11.737412 4775 generic.go:334] "Generic (PLEG): container finished" podID="f75714c3-400a-4e4a-b1b4-220a7b426db4" containerID="f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e" exitCode=2 Jan 23 14:53:11 crc kubenswrapper[4775]: I0123 14:53:11.737502 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerDied","Data":"f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e"} Jan 23 14:53:11 crc kubenswrapper[4775]: I0123 14:53:11.740591 4775 scope.go:117] "RemoveContainer" containerID="e95d4dff0e0513663060b6ceace58f07777bfbaef6dda1f8bb96d1849109c1ba" Jan 23 14:53:11 crc kubenswrapper[4775]: I0123 14:53:11.741405 4775 scope.go:117] "RemoveContainer" containerID="f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e" Jan 23 14:53:11 crc kubenswrapper[4775]: E0123 14:53:11.741842 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 10s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 23 14:53:18 crc kubenswrapper[4775]: I0123 14:53:18.492093 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:53:18 crc kubenswrapper[4775]: E0123 14:53:18.492756 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:53:22 crc kubenswrapper[4775]: I0123 14:53:22.713683 4775 scope.go:117] "RemoveContainer" containerID="f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e" Jan 23 14:53:23 crc kubenswrapper[4775]: I0123 14:53:23.546688 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c"} Jan 23 14:53:28 crc kubenswrapper[4775]: I0123 14:53:28.593680 4775 generic.go:334] "Generic (PLEG): container finished" podID="f75714c3-400a-4e4a-b1b4-220a7b426db4" 
containerID="f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c" exitCode=2 Jan 23 14:53:28 crc kubenswrapper[4775]: I0123 14:53:28.594095 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerDied","Data":"f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c"} Jan 23 14:53:28 crc kubenswrapper[4775]: I0123 14:53:28.594144 4775 scope.go:117] "RemoveContainer" containerID="f1a866b28be94125fb7ef2098abfd2da9afbb3547f72a6e0a546f64e476fc02e" Jan 23 14:53:28 crc kubenswrapper[4775]: I0123 14:53:28.594953 4775 scope.go:117] "RemoveContainer" containerID="f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c" Jan 23 14:53:28 crc kubenswrapper[4775]: E0123 14:53:28.595408 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 23 14:53:29 crc kubenswrapper[4775]: I0123 14:53:29.713987 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:53:29 crc kubenswrapper[4775]: E0123 14:53:29.714754 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:53:43 crc kubenswrapper[4775]: I0123 14:53:43.727490 4775 scope.go:117] "RemoveContainer" containerID="f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c" Jan 23 14:53:43 crc kubenswrapper[4775]: I0123 14:53:43.728215 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:53:43 crc kubenswrapper[4775]: E0123 14:53:43.728558 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:53:43 crc kubenswrapper[4775]: E0123 14:53:43.728613 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 20s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 23 14:53:54 crc kubenswrapper[4775]: I0123 14:53:54.714505 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:53:54 crc kubenswrapper[4775]: E0123 14:53:54.715322 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:53:57 crc kubenswrapper[4775]: I0123 14:53:57.714507 4775 scope.go:117] "RemoveContainer" containerID="f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c" Jan 23 14:53:58 crc kubenswrapper[4775]: I0123 14:53:58.880279 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerStarted","Data":"c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1"} Jan 23 14:54:02 crc kubenswrapper[4775]: I0123 14:54:02.927201 4775 generic.go:334] "Generic (PLEG): container finished" podID="f75714c3-400a-4e4a-b1b4-220a7b426db4" containerID="c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1" exitCode=2 Jan 23 14:54:02 crc kubenswrapper[4775]: I0123 14:54:02.927282 4775 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" event={"ID":"f75714c3-400a-4e4a-b1b4-220a7b426db4","Type":"ContainerDied","Data":"c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1"} Jan 23 14:54:02 crc kubenswrapper[4775]: I0123 14:54:02.927697 4775 scope.go:117] "RemoveContainer" containerID="f973f7a626434d8012e82e5f3a84a0eb7f802f7de6e71a15c7f64d93c61ca25c" Jan 23 14:54:02 crc kubenswrapper[4775]: I0123 14:54:02.928391 4775 scope.go:117] "RemoveContainer" containerID="c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1" Jan 23 14:54:02 crc kubenswrapper[4775]: E0123 14:54:02.928666 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 23 14:54:06 crc kubenswrapper[4775]: I0123 14:54:06.713976 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:54:06 crc kubenswrapper[4775]: E0123 14:54:06.716079 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:54:15 crc kubenswrapper[4775]: I0123 14:54:15.721277 4775 scope.go:117] "RemoveContainer" containerID="c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1" Jan 23 14:54:15 crc kubenswrapper[4775]: E0123 14:54:15.722401 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 
23 14:54:20 crc kubenswrapper[4775]: I0123 14:54:20.714727 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:54:20 crc kubenswrapper[4775]: E0123 14:54:20.717384 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:54:26 crc kubenswrapper[4775]: I0123 14:54:26.714373 4775 scope.go:117] "RemoveContainer" containerID="c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1" Jan 23 14:54:26 crc kubenswrapper[4775]: E0123 14:54:26.715469 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4" Jan 23 14:54:33 crc kubenswrapper[4775]: I0123 14:54:33.721971 4775 scope.go:117] "RemoveContainer" containerID="b5e598cbf349da815af5db0b22df9dc34e13444bedef413becde0b98162db747" Jan 23 14:54:33 crc kubenswrapper[4775]: E0123 14:54:33.724845 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-4q9qg_openshift-machine-config-operator(4fea0767-0566-4214-855d-ed0373946271)\"" pod="openshift-machine-config-operator/machine-config-daemon-4q9qg" podUID="4fea0767-0566-4214-855d-ed0373946271" Jan 23 14:54:38 crc kubenswrapper[4775]: I0123 14:54:38.714605 4775 scope.go:117] "RemoveContainer" containerID="c729da8ff3f49f558ed40dd25a653bd1bbbf5df91f14972abf7388a43581a5e1" Jan 23 14:54:38 crc kubenswrapper[4775]: E0123 14:54:38.715288 4775 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-manage\" with CrashLoopBackOff: \"back-off 40s restarting failed container=nova-manage pod=nova-kuttl-cell1-cell-delete-6p9nb_nova-kuttl-default(f75714c3-400a-4e4a-b1b4-220a7b426db4)\"" pod="nova-kuttl-default/nova-kuttl-cell1-cell-delete-6p9nb" podUID="f75714c3-400a-4e4a-b1b4-220a7b426db4"